When VIRTIO_F_ORDER_PLATFORM (36) is not negotiated, the frontend
and backend are assumed to be implemented in software, that is, they
run on identical CPUs in an SMP configuration.
Thus a weak form of memory barriers, such as rte_smp_r/wmb rather than
rte_cio_r/wmb, is sufficient for this case (vq->hw->weak_barriers == 1)
and yields better performance.
For this case, this patch yields even better performance by replacing
the two-way barriers with C11 one-way barriers for the used index in
the split ring.
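A minimal sketch of the idea (field names are illustrative, not the
exact driver code): the plain load plus a read barrier is replaced by a
single acquire load of the used index.

    /* two-way barrier scheme */
    used_idx = vq->vq_split.ring.used->idx;
    rte_smp_rmb();

    /* C11 one-way barrier scheme: the load itself carries acquire semantics */
    used_idx = __atomic_load_n(&vq->vq_split.ring.used->idx, __ATOMIC_ACQUIRE);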
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Since the return type of the '.stats_reset' and '.xstats_reset'
callback functions is int, the relevant callback should return a
non-zero value when it fails to issue the command to the firmware to
clear the statistics.
Fixes: 8839c5e202 ("net/hns3: support device stats")
Cc: stable@dpdk.org
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Currently, an error may occur during initialization of an hns3 VF
device. The root cause is as follows:
When the following formula is executed during initialization, the
private variable hw->tqps_num has not yet been obtained from the PF
driver through the mailbox, which then causes the mapping of interrupts
and queues to fail.
hw->num_msi = (num_msi > hw->tqps_num + 1) ? hw->tqps_num + 1 : num_msi;
We need to use hw->tqps_num only after it has been correctly assigned.
On the other hand, because the private variable hw->num_msi, which
represents the number of MSI-X interrupts of the hns3 PF/VF device, is
used in the '.get_reg' ops implementation to dump all interrupt related
registers, it should be obtained from the firmware directly and should
not be modified in the driver.
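A minimal sketch of the intended ordering (the surrounding code is
illustrative): keep hw->num_msi exactly as reported by firmware and only
clamp the vector count used for queue mapping, after hw->tqps_num has
been obtained via the mailbox.

    /* hw->num_msi stays as read from firmware; only the number of
     * vectors used for interrupt/queue mapping is clamped, and only
     * after hw->tqps_num is known. */
    uint16_t vec_cnt = RTE_MIN(hw->num_msi, (uint16_t)(hw->tqps_num + 1));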
Fixes: ef2e785c36 ("net/hns3: fix Tx interrupt when enabling Rx interrupt")
Fixes: 02a7b55657 ("net/hns3: support Rx interrupt")
Cc: stable@dpdk.org
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, when the upper level application calls the
rte_eth_dev_configure API function and the pvid configuration is not
set in the rte_eth_conf input parameter, the hns3 PMD still writes the
VLAN pvid related configuration to the hardware, which is not
reasonable. For example, if pvid has been set to 100 by
rte_eth_dev_set_vlan_pvid and the pvid configuration is not set in
rte_eth_conf, rte_eth_dev_configure tells the driver to delete pvid 0,
which is meaningless.
This patch fixes it by ensuring that the driver does not write the VLAN
pvid related configuration to the hardware when the pvid configuration
is not set in rte_eth_conf.
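A minimal sketch of the check (hns3_vlan_pvid_configure and the hns
handle are illustrative names, not necessarily the actual hns3 code):

    struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;

    if (txmode->hw_vlan_insert_pvid)
        ret = hns3_vlan_pvid_configure(hns, txmode->pvid, true);
    /* else: leave the PVID previously set via
     * rte_eth_dev_set_vlan_pvid() untouched */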
Fixes: 411d23b9ea ("net/hns3: support VLAN")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
The hns3 network engine is built into multiple SoCs, such as Kunpeng
920 and Kunpeng 930. The PCI revision id is 0x21 on Kunpeng 920 and
0x30 on Kunpeng 930.
This patch reads the PCI revision id to identify the different versions
of the hardware network engine.
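A minimal sketch of the idea, reading the standard PCI revision id
register (config space offset 0x08) with the generic PCI helper; the
constant name and error handling are illustrative:

    #define PCI_REVISION_ID_OFFSET 0x08

    uint8_t revision;
    int ret = rte_pci_read_config(pci_dev, &revision, sizeof(revision),
                                  PCI_REVISION_ID_OFFSET);
    if (ret != sizeof(revision))
        return -EIO;
    hw->revision = revision;  /* 0x21: Kunpeng 920, 0x30: Kunpeng 930 */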
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
This patch adds some prints of the return value when the relevant
function fails and the exception branch is entered.
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
When the upper level application calls the rte_eth_tx_burst API
function to send multiple packets at a time in burst mode on the hns3
network engine, some abnormal conditions can cause the driver to fail
to operate the hardware and send the packets correctly.
This patch adds statistics counters for the abnormal errors of the Tx
data path to the extended device statistics. The upper level
application can get them by calling the rte_eth_xstats_get API function
(see the usage sketch after the list below).
Note: when burst mode is used to send multiple packets at a time
through the rte_eth_tx_burst API function and the first abnormal error
is detected, the relevant error statistics item is increased by one and
the send loop of the function is exited. That is, even if abnormal
errors might be detected in several packets of the burst, the relevant
error statistics in the driver are only increased by one.
The detailed description of the Tx abnormal error statistics items is
as below:
- TX_OVER_LENGTH_PKT_CNT
     Total number of packets whose length exceeds the maximum frame
     length HNS3_MAX_FRAME_LEN supported by the driver.
- TX_EXCEED_LIMITED_BD_PKT_CNT
     Total number of packets whose required BD number exceeds the
     hardware limit.
- TX_EXCEED_LIMITED_BD_PKT_REASSEMBLE_FAIL_CNT
     Total number of packets whose required BD number exceeds the
     hardware limit and which also fail to be reassembled.
- TX_UNSUPPORTED_TUNNEL_PKT_CNT
     Total number of packets with an unsupported tunnel type:
     vxlan_gpe, gtp, ipip and MPLSINUDP, where MPLSINUDP is a packet
     with an MPLS-in-UDP RFC 7510 header.
- TX_QUEUE_FULL_CNT
     Total number of times the available BD number in the current BD
     queue is less than the BD number required to process the packet.
- TX_SHORT_PKT_PAD_FAIL_CNT
     Total number of times a packet shorter than the minimum packet
     size HNS3_MIN_PKT_SIZE fails to be padded with zeros.
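A minimal usage sketch for reading these counters from the application
side (a fragment assuming the usual rte_ethdev.h/inttypes.h includes, a
valid port_id, and with error handling trimmed):

    int n = rte_eth_xstats_get(port_id, NULL, 0);
    struct rte_eth_xstat *xstats = malloc(sizeof(*xstats) * n);
    struct rte_eth_xstat_name *names = malloc(sizeof(*names) * n);

    rte_eth_xstats_get_names(port_id, names, n);
    rte_eth_xstats_get(port_id, xstats, n);
    for (int i = 0; i < n; i++)
        printf("%s: %" PRIu64 "\n", names[i].name, xstats[i].value);
    free(names);
    free(xstats);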
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
To send request messages to data plane threads, the caller invokes the
thread_msg_send_recv() function, which never returns a null response.
Thus, the redundant check on the returned response is removed.
Coverity issue: 357717, 357772
Fixes: 70709c78fd ("net/softnic: add command to enable/disable pipeline")
Cc: stable@dpdk.org
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Make local MCDI helper functions static.
Fixes: f0bda0cd68 ("net/sfc/base: add MCDI wrappers for vPort and vSwitch in EVB")
Fixes: ea94d14dbe ("net/sfc/base: provide APIs to configure and reset vPort")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The maximum number of header modifications supported by a single
modify context on the root table cannot be queried from the firmware
directly. It is a fixed value of 16 in the latest releases. In the
validation stage, the PMD should ensure that no more than 16 header
modify actions exist in a single context.
In some old firmware releases, the supported value is 8. The PMD should
try its best to create the flow; the firmware will return an error and
refuse to create the flow if the number of actions exceeds the maximum
value.
Fixes: 72a944dba1 ("net/mlx5: fix header modify action validation")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, the indexed memory pool bitmap start address is not
explicitly aligned to the cache line size. The bitmap initialization
requires the address to be cache-line aligned, so the initialization
may fail if the address is not cache-line aligned.
Add RTE_CACHE_LINE_ROUNDUP() to the trunk size calculation to make sure
the bitmap offset address starts cache-line aligned.
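A minimal sketch of the fix (the struct and size names are
illustrative): rounding the part that precedes the bitmap up to a cache
line guarantees that the bitmap starts on a cache-line boundary.

    /* before: the bitmap may start at an arbitrary offset */
    trunk_size = sizeof(struct trunk_hdr) + data_size + bitmap_size;

    /* after: the bitmap offset is cache-line aligned */
    trunk_size = RTE_CACHE_LINE_ROUNDUP(sizeof(struct trunk_hdr) + data_size) +
                 bitmap_size;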
Fixes: a3cf59f56c ("net/mlx5: add indexed memory pool")
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Tested-by: Lijian Zhang <lijian.zhang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
The assert that checks whether there is enough room for the whole
packet minus the headroom data is written incorrectly.
The check should be negated in order to work properly.
Fixes: bd0d5930bf ("net/mlx5: enable MPRQ multi-stride operations")
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The assert in dynamic flow metadata handling is wrong after the
fix for the performance degradation. The assert meant to check
the metadata mask but was updated with the metadata offset instead.
Fix this assert and restore proper metadata mask checking.
Fixes: 6c55b622a9 ("net/mlx5: set dynamic flow metadata in Rx queues")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Rewrite the vectorized path selection logic. The default setting comes
from the vectorized devarg, then each criterion is checked.
The packed ring vectorized path needs:
- AVX512F and the required extensions supported by compiler and host
- VERSION_1 and IN_ORDER features negotiated
- mergeable feature not negotiated
- LRO offloading disabled
The split ring vectorized Rx path needs:
- mergeable and IN_ORDER features not negotiated
- LRO, checksum and VLAN strip offloads disabled
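A condensed sketch of the packed ring checks (helper names follow the
virtio PMD, but the exact code differs):

    int use_vec = 0;  /* illustrative selection flag */

    if (vectorized &&                       /* from the vectorized devarg */
        rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) &&
        vtpci_with_feature(hw, VIRTIO_F_VERSION_1) &&
        vtpci_with_feature(hw, VIRTIO_F_IN_ORDER) &&
        !vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF) &&
        !(rx_offloads & DEV_RX_OFFLOAD_TCP_LRO))
        use_vec = 1;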
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Optimize packed ring Tx path like Rx path. Split Tx path into batch and
single Tx functions. Batch function is further optimized by AVX512
instructions.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Optimize packed ring Rx path with SIMD instructions. The optimization
approach is similar to vhost: split the path into batch and single
functions. The batch function is further optimized with AVX512
instructions.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Move offload, xmit cleanup and packed xmit enqueue function to header
file. These functions will be reused by packed ring vectorized path.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add new devarg for virtio user device vectorized path selection.
By default vectorized path is disabled.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Previously, virtio split ring vectorized path was enabled by default.
This is not suitable for everyone because that path does not follow
virtio spec. Add new devarg for virtio vectorized path selection. By
default vectorized path is disabled.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Ring initialization is different when the in-order feature is
negotiated. This action should depend on the negotiated feature bits.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Introduce a free threshold setting for the Rx queue; its default value
is 32. Limit the threshold to a multiple of four, as only the
vectorized packed Rx function will utilize it. The virtio driver will
rearm the Rx queue when more than rx_free_thresh descriptors have been
dequeued.
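A minimal usage sketch from the application side (assuming dev_info,
nb_rxd and mbuf_pool are set up as usual; the value is an example):

    struct rte_eth_rxconf rxconf;

    rte_eth_dev_info_get(port_id, &dev_info);
    rxconf = dev_info.default_rxconf;
    rxconf.rx_free_thresh = 32;       /* must be a multiple of 4 */
    rte_eth_rx_queue_setup(port_id, 0, nb_rxd, rte_socket_id(),
                           &rxconf, mbuf_pool);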
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Currently, when creating a flow with a meter, the meter id is saved to
the rte flow. When destroying the flow, the meter object is looked up
by the meter id and released accordingly. But since the meter id is
configured by the user, a meter id of 0 makes no sense to flow destroy,
because 0 means the flow has no meter; the meter object with id 0 is
therefore leaked.
As the meter object is allocated from indexed memory and the index
starts from 1, saving the internally generated index instead of the
user defined meter id avoids the issue above.
This patch saves meter index instead of meter id in rte flow.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The assert makes sure that 'i' does not exceed the expected value, to
prevent an out-of-bound access to dbr_bitmap.
The current location of the assert protects the assignment of
dbr_bitmap, but not the access to it.
Move the assert to the correct place, to protect both cases.
Also, use an existing define in the assert.
Fixes: 21cae8580f ("net/mlx5: allocate door-bells via DevX")
Cc: stable@dpdk.org
Signed-off-by: Asaf Penso <asafp@mellanox.com>
Reviewed-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The output flow error parameter is used to indicate the detailed
reason of the failure when calling an rte_flow_* interface. Even though
the application sometimes does not check or use it, the PMD must fill
it in the failure branch before returning; otherwise, some dirty value
on the stack or heap may be accessed as a pointer and cause a crash.
In this case, when a port is stopped, the application is not allowed to
insert a flow, and the detailed error information should be filled. If
the application needs to check the detailed error reason, it will get
the information without any crash.
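A minimal sketch of filling the error in this branch (the errno value
and message are illustrative, not the exact mlx5 code):

    if (!dev->data->dev_started) {
        rte_flow_error_set(error, ENOTSUP,
                           RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
                           "port must be started before inserting a flow");
        return NULL;   /* the flow create op returns a flow pointer */
    }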
Fixes: 40b9e7f65f ("net/mlx5: check device status before creating flow")
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
After inserting an offload flow, the software flag information will
be updated based on the flow. When receiving a packet on this queue,
the hardware packet type bits and the software flag will be used
together to get the inner packet and tunnel header type (if any) from
the global packet type table.
When destroying a flow, the corresponding Rx queue flag needs to be
updated. All flags should be cleared when closing a device, because all
control flows and application flows are no longer valid.
Such behavior was missed when implementing the non-cached mode.
Fixes: 8db7e3b698 ("net/mlx5: change operations for non-cached flows")
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
1. ln_en bit should not be turned on, since we only support Rx VEB.
2. lan_en bit needs to be turned on for a DCF switch rule, otherwise
any Tx packet that hits the rule will be dropped.
Fixes: fed0c5ca5f ("net/ice/base: support programming a new switch recipe")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
After a VF reset, FDIR rules still take effect. To solve the issue,
this patch flushes all flows before flow uninit. VIRTCHNL sends
messages to the PF via the Admin Queue, so the flow flush must be done
before the Admin Queue is shut down.
Fixes: ff2d0c345c ("net/iavf: support generic flow API")
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
When we flush the FDIR filters, we cannot call the i40e_fdir_teardown()
function, as it frees the VSI used for FDIR and then vsi->base_queue is
released back to pf->qp_pool; however, vsi->base_queue can only be
obtained once, during device init in i40e_pf_setup(). If we free it, it
will never be allocated again.
Bugzilla ID: 404
Fixes: 2e67a7fbf3 ("net/i40e: config flow director automatically")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Currently, flow "pattern eth type is 0x0806 / end actions mark id
0x86 / rss / end" can't be created successfully. FDIR parser
shouldn't deny RTE_ETHER_TYPE_ARP since ARP packets will be
parsed as PCTYPE_L2_PAYLOAD. This patch fixes the issue.
Bugzilla ID: 402
Fixes: 42044b69c6 ("net/i40e: support input set selection for FDIR")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Ideally a rule with the "TO VSI LIST" action should not be deleted
when one of the VFs is reset. The correct behavior of the kernel PF
driver would be to remove the VSI of the reset VF from the VSI list,
but this is not implemented in the kernel PF yet, so the workaround is
for the DCF to prevent a rule with the "To VSI List" action from being
created.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Wei Zhao <wei.zhao1@intel.com>
Updated the params list to include the flush timer; this allows users
to set the HW flush timer value in tenths of a second.
Setting 0 disables the pending cache flush feature.
Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The ulp needs to be changed to properly call the index table
management routines and to use the index for external memory indices.
The ulp no longer has to account for stride, as tf_core returns the
actual offset, not a 0-based index.
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
- Added support for variable sized action records
- Additional error checking on table scope params
- Single external pool supported per direction
- Changed to return action record pointer
- Allows action pool to fully utilize the number of flows
Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
The resource function did not have a method of invalidating or
indicating that a resource is uninitialized. Added an invalid enum so
that processing works correctly for partially added flows.
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The max Rx ring count could be less than the max stat contexts. While
accounting for stat contexts, this should also be considered and the
max ring count adjusted accordingly.
Fixes: f03e66cb64 ("net/bnxt: limit queue count for NS3/Stingray devices")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
The AltiVec header file breaks the boolean type. [1] [2]
Currently the workaround is located only in the mlx5 driver.
Adding the trace module caused this issue to appear again: due to the
order of includes, it keeps overriding the local fix.
This patch solves the issue by restoring the bool type immediately
after it is changed.
[1] https://mails.dpdk.org/archives/dev/2018-August/110281.html
[2]
In file included from dpdk/ppc_64-power8-linux-gcc/include/rte_mempool_trace_fp.h:18:0,
                 from dpdk/ppc_64-power8-linux-gcc/include/rte_mempool.h:54,
                 from dpdk/drivers/common/mlx5/mlx5_common_mr.c:7:
dpdk/ppc_64-power8-linux-gcc/include/rte_trace_point.h: In function '__rte_trace_point_fp_is_enabled':
dpdk/ppc_64-power8-linux-gcc/include/rte_trace_point.h:226:2: error: incompatible types when returning type 'int' but '__vector __bool int' was expected
  return false;
  ^
In file included from dpdk/ppc_64-power8-linux-gcc/include/rte_trace_point.h:281:0,
                 from dpdk/ppc_64-power8-linux-gcc/include/rte_mempool_trace_fp.h:18,
                 from dpdk/ppc_64-power8-linux-gcc/include/rte_mempool.h:54,
                 from dpdk/drivers/common/mlx5/mlx5_common_mr.c:7:
dpdk/ppc_64-power8-linux-gcc/include/rte_mempool_trace_fp.h: In function 'rte_mempool_trace_ops_dequeue_bulk':
dpdk/ppc_64-power8-linux-gcc/include/rte_trace_point_provider.h:104:6: error: wrong type argument to unary exclamation mark
  if (!__rte_trace_point_fp_is_enabled()) \
      ^
dpdk/ppc_64-power8-linux-gcc/include/rte_trace_point.h:49:2: note: in expansion of macro '__rte_trace_point_emit_header_fp'
  __rte_trace_point_emit_header_##_mode(&__##_tp); \
  ^
dpdk/ppc_64-power8-linux-gcc/include/rte_trace_point.h:99:2: note: in expansion of macro '__RTE_TRACE_POINT'
  __RTE_TRACE_POINT(fp, tp, args, __VA_ARGS__)
  ^
dpdk/ppc_64-power8-linux-gcc/include/rte_mempool_trace_fp.h:20:1: note: in expansion of macro 'RTE_TRACE_POINT_FP'
 RTE_TRACE_POINT_FP(
 ^
dpdk/ppc_64-power8-linux-gcc/include/rte_mempool_trace_fp.h: In function 'rte_mempool_trace_ops_dequeue_contig_blocks':
dpdk/ppc_64-power8-linux-gcc/include/rte_trace_point_provider.h:104:6: error: wrong type argument to unary exclamation mark
  if (!__rte_trace_point_fp_is_enabled()) \
      ^
dpdk/ppc_64-power8-linux-gcc/include/rte_trace_point.h:49:2: note: in expansion of macro '__rte_trace_point_emit_header_fp'
  __rte_trace_point_emit_header_##_mode(&__##_tp); \
  ^
dpdk/ppc_64-power8-linux-gcc/include/rte_trace_point.h:99:2: note: in expansion of macro '__RTE_TRACE_POINT'
  __RTE_TRACE_POINT(fp, tp, args, __VA_ARGS__)
  ^
dpdk/ppc_64-power8-linux-gcc/include/rte_mempool_trace_fp.h:29:1: note: in expansion of macro 'RTE_TRACE_POINT_FP'
 RTE_TRACE_POINT_FP(
 ^
dpdk/ppc_64-power8-linux-gcc/include/rte_mempool_trace_fp.h: In function 'rte_mempool_trace_ops_enqueue_bulk':
dpdk/ppc_64-power8-linux-gcc/include/rte_trace_point_provider.h:104:6: error: wrong type argument to unary exclamation mark
  if (!__rte_trace_point_fp_is_enabled()) \
Fixes: 725f5dd0bf ("net/mlx5: fix build on PPC64")
Signed-off-by: Ori Kam <orika@mellanox.com>
Signed-off-by: David Christensen <drc@linux.vnet.ibm.com>
Tested-by: David Christensen <drc@linux.vnet.ibm.com>
Tested-by: Raslan Darawsheh <rasland@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
gcc 10.0.1 reports:
../drivers/net/avp/avp_ethdev.c: In function ‘avp_xmit_scattered_pkts’:
../drivers/net/avp/avp_ethdev.c:1791:24:
warning: ‘avp_bufs[count]’ may be used uninitialized in this function
[-Wmaybe-uninitialized]
1791 | tx_bufs[i] = avp_bufs[count];
| ~~~~~~~~^~~~~~~
Fix by initializing the array.
Fixes: 295abce2d2 ("net/avp: add packet transmit functions")
Cc: stable@dpdk.org
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: Steven Webster <steven.webster@windriver.com>
Change references to ABI 20.0.1 to use ABI v21, see
https://doc.dpdk.org/guides/contributing/abi_policy.html#general-guidelines
"Major ABI versions are declared no more frequently than yearly.
Compatibility with the major ABI version is mandatory in subsequent
releases until a new major ABI version is declared."
Combined ABI policy and versioning in maintainers, add map files to the
filter to more closely monitor future ABI changes.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
bnxt_free_one_vnic and bnxt_setup_one_vnic are called on configuring
port vlan stripping. bnxt_setup_one_vnic keeps incrementing the
vnic rx_queue_cnt. Fix to reset vnic rx_queue_cnt in bnxt_free_one_vnic.
Fixes: cfadfee41e ("net/bnxt: fix VLAN strip")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
During queue start (e.g. port 0 rxq 1 start), bnxt_free_hwrm_rx_ring()
clears the pointers in the mbuf array. Because of this, the queue is
overwritten with fresh mbuf allocations, causing the previously
allocated mbufs to leak.
Add a check before allocating an mbuf so that only empty mbuf slots in
the RxQ are replenished.
Fixes: 2eb53b134a ("net/bnxt: add initial Rx code")
Cc: stable@dpdk.org
Signed-off-by: Rahul Gupta <rahul.gupta@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
HWRM_PORT_MAC_QCFG is not supported on a VF. Added a PF check
in bnxt_hwrm_port_mac_qcfg() to prevent the probe failure on a VF.
Fixes: f6e250d21a ("net/bnxt: fetch SVIF information from firmware")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Firmware reports any fatal error (either ASIC or firmware related) via
a new status register. This status register can provide more detailed
information about firmware errors, especially if an error occurs before
HWRM_VER_GET is issued. Attempt to map this register if it is present,
and check the firmware status when the VER_GET command fails.
Refactored the code to allocate the "bp->recovery_info" structure in
bnxt_init_fw() instead of doing it in bnxt_hwrm_error_recovery_qcfg().
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The driver destroys the vnic when the port is brought down.
When the user tries to add a VLAN while the port is stopped, the driver
issues an HWRM command to the FW with an invalid vnic_id and it fails.
Fixed to return an error when setting a VLAN while the port is not
started.
Fixes: b4e190d55c ("net/bnxt: fix MAC/VLAN filter allocation")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Fixed to invoke clean up in the reverse sequence of
initialization in case any of the FW commands fail
during port start.
Fixes: 0b53359123 ("net/bnxt: inform firmware about IF state changes")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The FW returns HWRM_ERR_CODE_HOT_RESET_PROGRESS (0xa) when it is
unable to process a specific command while a hot reset is in progress.
The host driver is expected to keep retrying the command for 2s with a
gap of 50ms between each retry.
Also, fixed to fail port start if HWRM_FUNC_DRV_IF_CHANGE still returns
an error after 2 seconds.
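A minimal sketch of the retry scheme described above (constants and the
helper name are illustrative, not the exact bnxt code):

    #define IF_CHANGE_RETRY_GAP_MS 50
    #define IF_CHANGE_RETRY_COUNT  (2000 / IF_CHANGE_RETRY_GAP_MS)

    int retry = IF_CHANGE_RETRY_COUNT;
    int rc;

    do {
        rc = bnxt_hwrm_if_change(bp, true);
        if (rc != -EAGAIN)  /* assume HOT_RESET_PROGRESS maps to -EAGAIN */
            break;
        rte_delay_ms(IF_CHANGE_RETRY_GAP_MS);
    } while (--retry);

    if (rc)                 /* still failing after ~2s: fail port start */
        return rc;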
Fixes: 0b53359123 ("net/bnxt: inform firmware about IF state changes")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Added information about supported speeds for the port in the
"dev_infos_get". As other PMDs are returning the speed capabilities,
apps may expect this behavior from bnxt PMD.
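A minimal sketch of reporting the capabilities (the exact speed set
depends on the adapter):

    dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
                           ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
                           ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;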
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Use PCI_PRI_FMT instead of "%04x:%02x:%02x:%02x" print format.
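For example (a sketch; the log macro and message are illustrative):

    /* before */
    PMD_DRV_LOG(ERR, "Invalid device %04x:%02x:%02x:%02x\n",
                addr->domain, addr->bus, addr->devid, addr->function);
    /* after */
    PMD_DRV_LOG(ERR, "Invalid device " PCI_PRI_FMT "\n",
                addr->domain, addr->bus, addr->devid, addr->function);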
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
There is no ENODATA in the errno.h in BSD.
Use a common errno to return error.
Fixes: 69c410b844 ("net/bnxt: support EM/EEM")
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
If a VF is reset, the kernel PF will remove the rules associated with
the reset VF whether the HW VSI ID has changed or not. So the DCF
should redirect all rules associated with the reset VF whether the HW
VSI ID has changed or not.
Fixes: 3b3757bda3 ("net/ice: get VF hardware index in DCF")
Fixes: c8183dd8e0 ("net/ice: redirect switch rule to new VSI")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
The iavf_dev_stats_get function should return ret instead of -EIO.
Fixes: f4a41a6953 ("net/avf: support stats")
Cc: stable@dpdk.org
Signed-off-by: Cheng Peng <cheng.peng5@zte.com.cn>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
The meson build file does not enable i40e vectorization support for
PPC/altivec systems, even though the existing Makefile does enable the
support. Add the required architecture check and sources line.
Signed-off-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
It is normal behavior for the link status to change to up after
resetting the port. So it is unnecessary to set the link down before
starting the port, and changing the link state (up/down) frequently
makes the link speed unstable.
Fixes: c3f2fbff78 ("net/ixgbe: fix link status")
Cc: stable@dpdk.org
Signed-off-by: Shougang Wang <shougangx.wang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Tested-by: Xueming Zhang <xuemingx.zhang@intel.com>
Add switch filter support for the AH, ESP and L2TP protocols, and use
SPI or session id as input set for the switch rule.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add dummy packet and tunnel type to support
L2TP on switch, now we can use session id as
input set for switch rule.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add dummy packet and tunnel type to support
AH ESP and NAT-T on switch, now we can use SPI as
input set for switch rule.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When the thread exits normally, pthread_join() is not called, which
can result in a resource leak. Therefore, the thread is put into
detached mode using pthread_detach(), so that no pthread_join() call is
required to reclaim it; when the thread exits, the system automatically
reclaims its resources.
Waiting for the thread to finish takes a timeout argument (0 means it
does not return until the link setup is complete) and waits until the
thread finishes before returning. Normally the thread finishes quickly,
and a warning message is given if it has not finished after a longer
time.
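A minimal sketch of the detach part (the thread handler and adapter
field names are illustrative):

    ret = pthread_create(&ad->link_thread_tid, NULL,
                         ixgbe_dev_setup_link_thread_handler, dev);
    if (ret == 0)
        /* detached: no pthread_join() is needed, resources are
         * reclaimed automatically when the thread exits */
        pthread_detach(ad->link_thread_tid);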
Fixes: 819d0d1d57 ("net/ixgbe: fix blocking system events")
Cc: stable@dpdk.org
Signed-off-by: Tao Zhu <taox.zhu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
MPRQ is silently turned off if not enough Rx queues are configured.
Improve the logging to show a warning in this case, to notify the user
about the selected Rx burst function.
Fixes: 7d6bf6b866 ("net/mlx5: add Multi-Packet Rx support")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Using a global mbuf dynamic field for metadata incurs some
performance penalty on a datapath. Store this information in
the Rx queue descriptor for a better cache locality.
Fixes: a18ac61133 ("net/mlx5: add metadata support to Rx datapath")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The number of header modify actions supported has a limitation decided
by both the driver and the hardware. If the configuration is different
or the table the flow is inserted into is different, the result might
differ for a flow that contains header modify actions.
Currently, the actual number of actions can only be calculated in the
later translate stage, when the user specified value is converted to
the driver format, and the action number check is missing from flow
validation. So the PMD returns an incorrect result indicating that the
flow actions are valid via rte_flow_validate, but then the flow fails
when calling rte_flow_create.
Adding a simple check in the validation helps to get rid of this
inconsistency. Most actions consume only 1 SW action field, except for
the MAC address and IPv6 address actions. From the SW point of view,
the maximum number of action fields is consumed for these even if only
part of the field is modified, because there is no mask in the flow
actions and the mask is always all ONEs.
Metering or extra metadata support costs one more action.
Fixes: 9597330c68 ("net/mlx5: update modify header action translator")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The meters of a port share the same meter tables on the port. When
releasing a meter, do not check the returned value with an assert,
because other meters may still reference the tables.
Fixes: 46a5e6bc6a ("net/mlx5: prepare meter flow tables")
Fixes: 9dbaf7eef6 ("net/mlx5: fix meter suffix table leak")
Cc: stable@dpdk.org
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This patch adds Rx/Tx queue FBD information to the extended device
statistics. The upper level application can get it by calling the
rte_eth_xstats_get API function.
The FBD registers of every Rx/Tx queue are very useful to identify
Rx/Tx bottlenecks:
1. The Rx queue FBD register is the number of unprocessed buffer
descriptors waiting for the driver to process;
2. The Tx queue FBD register is the number of unprocessed buffer
descriptors waiting for the network engine hardware to process.
As a result, we get the following output in the testpmd application
with the "show port xstats" command:
rx_q0RX_QUEUE_FBD: 19
rx_q1RX_QUEUE_FBD: 18
tx_q0TX_QUEUE_FBD: 0
tx_q1TX_QUEUE_FBD: 0
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
This patch modifies the print format for the firmware version in the
log: it replaces "0x%08x" with "%lu.%lu.%lu.%lu" in the format string.
It also adds the ".fw_version_get" ops implementation for the hns3 VF
PMD driver.
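A sketch of the idea, assuming a simple byte-per-field packing of the
32-bit version word (the real hns3 field widths and log macro may
differ):

    uint32_t ver = hw->fw_version;

    hns3_info(hw, "firmware version: %lu.%lu.%lu.%lu",
              (unsigned long)((ver >> 24) & 0xff),
              (unsigned long)((ver >> 16) & 0xff),
              (unsigned long)((ver >> 8) & 0xff),
              (unsigned long)(ver & 0xff));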
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
The VF must be capable of configuring RSS. Add a virtchnl handler to
parse a specific RSS configuration, and process the configuration for
VFs, such as adding or deleting an RSS rule.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
This commit focuses on optimizing the flow meter data structure
mlx5_flow_meter: optimize the memory consumption of the flow meter
data structure, reorganize it and delete unnecessary data fields.
Signed-off-by: Wentao Cui <wentaoc@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This patch enables mark action support and takes mark only case
into consideration.
Signed-off-by: Simei Su <simei.su@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
This patch enables PFCP node and session packets with S_FIELD
for flow director filter.
Signed-off-by: Simei Su <simei.su@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
This patch enables L2TPv3 with SESSION_ID, ESP/AH with SPI, NAT-T
with SPI and IP src/dst for flow director filter.
Signed-off-by: Simei Su <simei.su@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
This patch enables GTPU with TEID and QFI for flow director filter.
Signed-off-by: Simei Su <simei.su@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds FDIR create/destroy/validate function in AVF.
Common pattern and queue/qgroup/passthru/drop actions are supported.
Signed-off-by: Simei Su <simei.su@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Align the package file search sequence with PF only for DCF mode. Get
the DSN through the virtual channel firstly to check the accessibility
of the package file.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Support RSS hash parsing from Flex Rx
descriptor in SSE data path.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Support RSS hash parsing from Flex Rx
descriptor in AVX data path.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Support Flow Director mark ID parsing from Flex
Rx descriptor in SSE path.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Support Flow Director mark ID parsing from Flex
Rx descriptor in AVX path.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
The commit adds fdir_enabled flag into iavf_rx_queue structure
to identify if fdir id is active. Rx data path can be benefit if
fdir id parsing is not needed, especially in vector path.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Support flexible Rx descriptor format in SSE
path of iAVF PMD.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Support flexible Rx descriptor format in AVX
path of iAVF PMD.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Support flexible Rx descriptor format in normal
path of iAVF PMD.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Add an error return when the opcode of the message read from the
adminQ is mismatched.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Currently, the rte flow structure is not fully aligned and some bits
are wasted. The members can be optimized and reorganized to save memory
(a sketch of the idea follows below).
1. drv_type uses only limited bits; change its type to the 2 bits it
needs.
2. Pack hairpin_flow_id, drv_type, fdir and copy_applied into 32 bits,
as hairpin never uses the full 32 bits.
3. __rte_packed helps tighten up the structure memory layout.
The optimization saves 14 bytes in total for the structure.
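A minimal illustration of the packing (not the exact mlx5 definition):

    /* pack the small members into a single 32-bit word */
    struct flow_bits {
        uint32_t drv_type:2;          /* flow driver backend type */
        uint32_t fdir:1;              /* flow is an fdir flow */
        uint32_t copy_applied:1;      /* mark copy resource applied */
        uint32_t hairpin_flow_id:28;  /* hairpin never needs 32 bits */
    } __rte_packed;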
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit allocates the rte flow from the indexed memory pool.
Allocating the rte flow memory from the indexed memory pool saves more
than MALLOC_ELEM_OVERHEAD bytes of memory compared with rte_malloc().
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When destroying a flow with RSS, the flow can retrieve the queue
information from the hrxq index table object, since both the queue
number and the queue list are saved in that object. There is no need to
save the duplicated data in the rte flow.
Saving the RSS description information to the intermediate private data
when creating a flow with the RSS action helps save memory for the rte
flow.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit is for mlx5 fdir flow memory optimization.
Currently, the fdir member in the rte_flow structure saves the fdir
memory pointer directly. As fdir is fading away, use one bit to
indicate that function in the flow and add the content to an extra
list, which saves memory for the other, more common use cases.
Signed-off-by: Wentao Cui <wentaoc@mellanox.com>
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Allocating the mark copy resource from the indexed pool lets the rte
flow save a 4 byte index instead of an 8 byte pointer. For the mark
copy resource itself, it saves MALLOC_ELEM_OVERHEAD bytes compared with
rte_malloc().
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This patch allocates the meter object memory from the indexed memory
pool, which helps save the MALLOC_ELEM_OVERHEAD memory taken by
rte_malloc().
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When a flow attaches a meter handle, the meter id can be the unique
tag for the flow to look up the meter handle, so there is no need for
the flow to save the pointer to the meter handle.
Saving the meter id instead of the pointer helps reduce the size of the
rte flow structure.
As the maximum number of supported meter rules is 4K, the uint16_t type
is selected for the meter id.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, the mlx5_flow_handle struct is not fully aligned and some
bits are wasted. The members can be optimized and reorganized to save
memory.
1. As metadata and meter share the same flow match id, the flow id is
limited to 24 bits because the 8 MSBs are used for the meter color.
Pack the flow id with the other bit members into 32 bits to save memory
in the mlx5 flow handle.
2. The vlan_vf in struct mlx5_flow_handle_dv was already moved to
struct mlx5_flow_handle. Remove the legacy vlan_vf in struct
mlx5_flow_handle_dv.
3. Reorganize the vlan_vf in mlx5_flow_handle next to the SILIST_ENTRY
member to make it 8 byte aligned.
4. Reorganize the header modify in mlx5_flow_handle_dv next to the
ILIST_ENTRY member to improve its alignment.
5. Introduce the __rte_packed attribute to make the struct tightly
organized.
This saves 20 bytes in total for the mlx5_flow_handle struct.
For the resource objects which are converted to indexed, align their
names with the rix_ prefix.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
As only limited bits are used in act_flags for flow destroy, it is a
bit expensive to save the whole 64 bits. Move act_flags out of the flow
handle and save only the bits needed for flow destroy, to save some
bytes in the flow handle data struct.
The fate action type and mark bits are kept, as they are used in flow
destroy.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, one flow has only one fate action, so the fate action
members in the flow struct can be reorganized as a union to save memory
for the flow struct.
This commit reorganizes the fate actions as a union; act_flags helps to
identify the fate action type when the flow is destroyed.
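An illustrative sketch of the union idea (member names mimic the rix_
naming used for indexed resources later in this series, but the exact
layout may differ):

    struct flow_fate {
        union {                           /* one flow has exactly one fate */
            uint32_t rix_hrxq;            /* queue/RSS fate */
            uint32_t rix_jump;            /* jump fate */
            uint32_t rix_port_id_action;  /* port-id fate */
            uint32_t rix_fate;            /* generic accessor */
        };
        uint32_t fate_type;               /* bits kept from act_flags */
    };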
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit converts flow dev handle to indexed.
Changing the mlx5 flow handle from a pointer to a uint32_t saves
memory for the flow. With a million flows, it saves several MBytes of
memory.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit converts hrxq to indexed.
Using a uint32_t index instead of a pointer saves 4 bytes of memory in
the flow handle. For millions of flows, it will save several MBytes of
memory.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit converts the jump resource to indexed.
The table data struct is allocated from indexed memory. As it is added
to the hash list, the pointer is still used for the hash list search.
The index is added to the table struct, and the pointer in the flow
handle is reduced to a uint32_t. For flows without a jump action, it
saves 4 bytes of memory.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit converts port id action to indexed.
Using a uint32_t index instead of a pointer saves 4 bytes of memory in
the flow handle. For millions of flows, it will save several MBytes of
memory.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit converts the tag resource to indexed.
As tag resources are added to the hash list, to avoid introducing a
performance issue and to keep the hash list, only the tag resource
memory is allocated from indexed memory; the resources are still added
to the hash list. Adding a four byte index to the tag resource struct
and changing the tag resources in the flow handle from a pointer to a
uint32_t seems to bring no benefit for the tag resource itself, but it
saves memory for flows without a tag action, and also for sub flows
that share one tag action resource.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit converts the push VLAN resource to indexed.
Using a uint32_t index instead of a pointer saves 4 bytes of memory in
the flow handle. For millions of flows, it will save several MBytes of
memory.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit converts the flow encap/decap resource to indexed.
Using a uint32_t index instead of a pointer saves 4 bytes of memory in
the flow handle. For millions of flows, it will save several MBytes of
memory.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When all the entries in a trunk are freed, the trunk itself becomes
free. The user may prefer that the free trunk memory be reclaimed.
Add the trunk memory release option for the indexed pool for this case.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit adds dynamic trunk grow for the indexed pool.
For pools where the needed entry number is not known in advance, the
pool can be configured in progressive growth mode. It means the trunk
size is increased dynamically, trunk after trunk, until it reaches a
stable value. This saves memory by avoiding the allocation of a very
big trunk at the beginning.
The user should set both grow_shift and grow_trunk to make the trunk
growth work. Keeping one or both of grow_shift and grow_trunk as 0
makes the trunks work with a fixed size.
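An illustrative configuration sketch (the field names assume the
indexed pool config structure introduced in this series; the values are
just an example):

    struct mlx5_indexed_pool_config cfg = {
        .size = sizeof(struct mlx5_flow_handle),
        .trunk_size = 64,     /* initial trunk size */
        .grow_trunk = 3,      /* number of growth steps before stabilizing */
        .grow_shift = 2,      /* each step scales the trunk size by 1 << 2 */
        .need_lock = 0,
        .release_mem_en = 1,  /* reclaim fully freed trunks */
        .type = "flow_handle_ipool",
    };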
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, memory allocated by rte_malloc() introduces more than 64
bytes of overhead. It means that when 64 bytes of memory are allocated,
the real memory cost may be double. Even the libc malloc() overhead is
16 bytes. If users try to allocate millions of small memory blocks, the
overhead cost may be huge, and saving the memory pointer is also quite
expensive.
The indexed memory pool is introduced to save memory when allocating a
huge number of small memory blocks. The indexed memory uses trunks and
bitmaps to manage the memory entries. When the pool is empty, a trunk
slot containing the memory entry array is allocated first. The bitmap
in the trunk records the entry allocation. The offset of the trunk slot
in the pool and the offset of the memory entry in the trunk slot
together compose the index of the memory entry. So, given the index, it
is very easy to address the memory of the entry. The user saves the
32-bit index for the memory resource instead of the 64-bit pointer.
Users should create different pools for allocating different sizes of
small memory blocks; one pool provides allocation of one fixed size of
small memory block.
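A simplified sketch of the index composition (the bit split, macro and
pool layout are illustrative; index 0 is reserved to mean "no entry"):

    /* compose: trunk slot offset in the pool + entry offset in the trunk */
    uint32_t idx = (trunk_idx << ENTRY_BITS) | entry_offset;

    /* decompose when addressing the entry memory */
    uint32_t t = idx >> ENTRY_BITS;
    uint32_t e = idx & ((1u << ENTRY_BITS) - 1);
    void *entry = pool->trunks[t]->data + (size_t)e * pool->entry_size;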
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The fd may be a negative value when it is passed as an argument to the
"close" function. Fix the check on the fd.
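A minimal sketch of the corrected check (the array name is
illustrative):

    if (process_private->fds[i] >= 0) {
        close(process_private->fds[i]);
        process_private->fds[i] = -1;
    }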
Fixes: ed8132e7c9 ("net/tap: move fds of queues to be in process private")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The port database is a repository of the port details; it is used by
the ulp code to query any port related details.
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
In order to re-use allocated resources and reduce search complexity for
simple keys, a generic software cache table was added for the TCAM. The
implementation is specifically only for keys that can be compressed to
less than 16 bits. The keys are generated using the same mechanisms as
other search tables, but the table type is set to a cache that mirrors
the actual TCAM table. The allocated result fields are stored in the
cache entry and can be used for subsequent searches in future tables.
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
- Added ulp_mapper_init/deinit to allocate/deallocate mapper data for
storing the default identifiers
- Modified the template_db to include the new opcode for accessing the
default ids.
- Modified the result and key field builders to use the new opcode for
writing the default ids into blobs
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
The ulp flow flush has been extended to support session flow flush and
function flow flush. The session flow flush is called when the device
is the sole owner of the session, and it deletes all the flows
associated with that session. The function flow flush is called if the
device function is not the sole owner of the session; it deletes all
the flows that are associated with that device function.
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Currently, all the flow templates are sequentially searched to find out
whether there is a matching template for the incoming RTE_FLOW offload
request. As sequential search will have performance concerns, this
patch will address it by using hash algorithm to find out the flow
template. This change resulted in creation of computed fields to
remove the fields that do not participate in the hash calculations.
The field bitmap is created for this purpose.
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
The changes are to the ulp mapper flow_create; the API is changed to
take the bnxt_ulp_mapper_create_parms structure instead of individual
fields.
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
The changes are to the ulp rte parser; the APIs are changed to take
the parser param structure instead of individual fields.
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Added the name of the resource to the index/result and key/mask common
builder functions.
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
This API can be used to iterate individual resource
functions in the flow database.
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Allow the flow db resources to be more effectively utilized.
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Modification of the parser to get the SVIF from the driver for matches
on port_id, pf, and phy_port.
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch allows flow stats to be displayed in the extended stats.
To do this, DMA-able memory is registered with the FW during device
initialization. Then the driver uses an alarm thread to query the
per flow stats using the HWRM_CFA_COUNTER_QSTATS HWRM command at
regular intervals and stores it locally which will be displayed
when the application queries the xstats.
The DMA-able memory is unregistered during driver cleanup.
This functionality can be enabled using the flow-xstat devarg and
will be disabled by default. The intention behind this is to allow
stats to be displayed for all the flows in one shot instead of
querying one at a time.
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
HWRM API allows drivers to query stats per PCI function.
These stats can provide some useful information in certain
circumstances.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
This patch adds support of the VIRTIO_NET_F_SPEED_DUPLEX feature to
the virtio driver.
There are two ways to specify the speed of the link:
- the 'speed' devarg
- negotiating the speed from qemu via VIRTIO_NET_F_SPEED_DUPLEX
The devarg has the highest priority. If the devarg is not specified,
the driver tries to negotiate the speed from qemu.
Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The virtio driver already parses the speed devarg. virtio-user should
add it to the list of valid devargs and call the eth_virtio_dev_init
function, which initializes the speed value.
eth_virtio_dev_init is already called from the virtio_user_pmd_probe
function, so the only change required to enable the speed devarg is
adding speed to the list of valid devargs.
Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
strtoull returns 0 if it fails to parse the input string; this is
ignored in get_integer_arg.
This patch handles the error cases of the strtoull function.
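A minimal sketch of the kind of check added (the variable names are
illustrative; assumes the usual errno.h/stdlib.h/stdint.h includes):

    char *end = NULL;

    errno = 0;
    uint64_t val = strtoull(str, &end, 0);
    if (errno != 0 || end == NULL || *end != '\0')
        return -EINVAL;   /* reject bad input instead of silently using 0 */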
Fixes: ce2eabdd43 ("net/virtio-user: add virtual device")
Cc: stable@dpdk.org
Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Some applications, like pktgen, use the link speed to calculate the
transmission rate. This limits the outgoing traffic to the hardcoded
10G.
This patch adds the speed devarg, which allows configuring the link
speed of the virtio device.
Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Refactor the vdpa specific devargs parsing in a more generic way.
Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
When a flow is offloaded with MARK action (RTE_FLOW_ACTION_TYPE_MARK),
each packet of that flow will have metadata set in its completion.
This metadata will be used to fetch an index into a mark table where
the actual MARK for that flow is stored. Fetch the MARK from the mark
table and inject it into the packet's mbuf.
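A minimal sketch of the receive-side handling (the lookup helper and
metadata variable are illustrative):

    uint32_t mark;

    if (ulp_mark_db_lookup(mark_db, meta /* from the Rx completion */,
                           &mark) == 0) {
        mbuf->hash.fdir.hi = mark;
        mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
    }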
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
If bp->truflow is not set then don't enable vector mode.
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch does the following
1. Gets the ulp session information from eth_dev
2. Fetches the rte_flow table associated with this session
3. Iterates through all the flows in the flow table
4. Calls ulp_mapper_resources_free which releases the key & action
tables associated with each flow
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch does the following
1. Gets the ulp session information from eth_dev
2. Fetches the flow associated with the flow id from the flow table
3. Calls ulp_mapper_resources_free which releases the key & action
tables associated with that flow
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch does the following
1. Validates rte_flow_create arguments
2. Parses rte_flow_item types
3. Parses rte_flow_action types
4. Calls ulp_matcher_pattern_match to see if the flow is supported
5. If there is a match, returns success otherwise failure
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
This patch does the following
1. Validates rte_flow_create arguments
2. Parses rte_flow_item types
3. Parses rte_flow_action types
4. Calls ulp_matcher_pattern_match to see if the flow is supported
5. If there is a match, calls ulp_mapper_flow_create to program
key & action tables
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch does the following
1. Registers a callback handler for each rte_flow_action type, if
it is supported
2. Iterates through each rte_flow_action till RTE_FLOW_ACTION_TYPE_END
3. Invokes the action callback handler
4. Each action callback handler populates the respective fields in
act_details & act_bitmap
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
1. Registers a callback handler for each rte_flow_item type, if it
is supported
2. Iterates through each rte_flow_item till RTE_FLOW_ITEM_TYPE_END
3. Invokes the header callback handler
4. Each header callback handler populates the respective fields
in hdr_field & hdr_bitmap
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch does the following
1. Takes act_bitmap generated from the rte_flow_actions
2. Iterates through the static act_bitmap list
3. Returns success if a match is found, otherwise an error
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch does the following
1. Takes hdr_bitmap generated from the rte_flow_items
2. Iterates through the static hdr_bitmap list
3. Returns success if a match is found, otherwise an error
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch does the following
1. Gets the action tables information from the action template id
2. Gets the class tables information from the class template id
3. Initializes the registry file
4. Allocates a flow id from the flow table
5. Processes the class & action tables
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch does the following
1. Gets all the flow resources from the flow id
2. Frees all the table resources
3. Frees the flow in the flow table
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch processes the action template: it iterates through the list
of action info templates and processes each of them.
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
A ULP session will contain all the resources needed to support
rte flow offloads. A session is initialized as part of rte_eth_device
start. A DPDK application can have multiple interfaces which
means rte_eth_device start will be called for each of these devices.
ULP session manager will make sure that a single ULP session is only
initialized once. Apart from this, it also initializes MARK database,
EEM table & flow database. ULP session manager also manages a list of
all opened ULP sessions.
This patch adds support for cleaning up resources initialized for ULP
sessions.
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
A ULP session will contain all the resources needed to support
rte flow offloads. A session is initialized as part of rte_eth_device
start. A DPDK application can have multiple interfaces which
means rte_eth_device start will be called for each of these devices.
ULP session manager will make sure that a single ULP session is only
initialized once. Apart from this, it also initializes MARK database,
EEM table & flow database. ULP session manager also manages a list of
all opened ULP sessions.
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This feature can be enabled by passing
"-w 0000:0d:00.0,host-based-truflow=1” to the DPDK application.
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
VNIC is needed for the driver to program the action record for rx
flows. VNIC determines what receive rings to use to place the received
packets. This patch introduces a routine that will convert a given
DPDK port to a VNIC.
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
SVIF (source virtual interface) is used to represent a physical port,
physical function, or a virtual function. SVIF is compared during L2
context and exact match lookups in the TX direction. SVIF is masked for
port information during L2 context and exact match lookups in the RX
direction. Hence, the driver needs this SVIF information to program the
L2 context and exact match tables.
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
- Add TruFlow flow memory support
- Exact Match (EM) adds the capability to manage and manipulate
data flows using on chip memory.
- Extended Exact Match (EEM) behaves similarly to EM, but at a
vastly increased scale by using host DDR, with performance
trade-off due to the need to access off-chip memory.
Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
- Added TruFlow Table public API
- Added Table Scope capability including Table Type support code for
setting and getting Table Types.
Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
- Add TruFlow TCAM public API functions
- Add TCAM support functions as well as public APIs.
Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
- Add TruFlow Identifier resource support
- Add TruFlow public API for Identifier resources.
- Add support code and stack for Identifier resource allocation control.
Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
- Add TruFlow RM functionality for resource handling
- Update the TruFlow Resource Manager (RM) with resource
support functions for debugging as well as resource cleanup.
- Add support for Internal and external pools.
Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
- Add TruFlow public API definitions for resources
as well as RM infrastructure
Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
- Add TruFlow session resource support functionality
- Add TruFlow session hw flush capability as well as
sram support functions.
- Add resource definitions for session pools.
Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
- Add TruFlow session and resource support functions
- Add Truflow session close API and related message support functions
for both session and hw resources
Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
- Change HWRM_PREP to use pointer and use the full
HWRM enum
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The NIC's interrupt source has an active handler, which may call
tap_dev_intr_handler() to set the link handler. We should cancel the link
handler before closing the fd to prevent the link handler from executing;
otherwise it triggers a segfault.
Call Trace:
0x00007f15e08dad99 in __rte_panic (Error adding fd %d epoll_ctl, %s\n")
0x00007f15e08e9b87 in eal_intr_thread_main ()
0x00007f15e249be15 in start_thread ()
0x00007f15d5322f9d in clone ()
Fixes: c0bddd3a05 ("net/tap: add link status notification")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add ESP patterns to i40e_flow_parse_rss_pattern().
Update i40e PMD user guide with download link for esp-ah.pkg file.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
When eth_dev_tap_create() fails, nlsk_fd and ka_fd are not closed, which
leads to an fd leak. Moreover, zero is a valid fd, so a valid fd could
ultimately be closed by mistake.
Fixes: bf7b7f437b ("net/tap: create netdevice during probing")
Fixes: cb7e68da63 ("net/tap: fix cleanup on allocation failure")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
The internal structure is freed and set to NULL in
rte_eth_dev_release_port(), and zero is a valid fd; this ultimately
leads to a valid fd being closed by mistake.
Fixes: 3101191c63 ("net/tap: fix device removal when no queue exist")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Currently rxq->pool is a chain of concatenated mbufs, but its nb_segs is 1.
Sanity checks on the mbuf therefore fail when debug is enabled.
Fixes: 0781f5762c ("net/tap: support segmented mbufs")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
For the tap PMD, we should release mbufs and iovecs from the Rx queue
when closing the device. In order to remove duplicated code,
rte_pmd_tap_remove() now calls tap_dev_close().
Fixes: 0781f5762c ("net/tap: support segmented mbufs")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
When the tap_write_mbufs() function returns via break, the mbuf was freed
without increasing num_packets, which could cause applications to free
the mbuf again. Also, the pmd_tx_burst() function should return the
number of original packets it actually sent, excluding TSO mbufs.
Fixes: 9396ad3346 ("net/tap: fix reported number of Tx packets")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Due to a hardware limitation, when the PMD creates a push VLAN action it
needs to know the exact values of VID/PCP.
The PMD tries to figure them out via:
- of_set_vlan_vid/pcp actions
- the VLAN item in the pattern
If none of the above is provided, the default value - zero - is used.
However, a user may write a rule like [1], which matches on a range of VIDs
without an of_set_vlan_vid action, and expect the VID to be inherited from
the original packet. This is not currently supported by hardware. The PMD
will set the VID to the default value - zero - because it cannot figure out
the exact value of the VID from the VLAN item.
This is misleading for some users.
In order to avoid this, the PMD returns an error for rules like [1] to
force the user to provide an explicit VID/PCP for newly pushed VLAN headers.
[1]: testpmd> flow create 2 ingress transfer group 0 priority 3 pattern
eth / vlan vid spec 2859 vid prefix 4 / ipv4 / end
actions of_push_vlan ethertype 0x88A8 /
of_set_vlan_pcp vlan_pcp 6 / port_id id 0 / end
Fixes: 9aee7a8418 ("net/mlx5: support push flow action on VLAN header")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Reviewed-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, when the PMD creates a push VLAN action it needs to provide the
VID to hardware, and the PMD gets the VID value from the VLAN item in the
pattern if no of_set_vlan_vid action follows.
When a user creates a rule like [1], which has an of_set_vlan_vid action
before of_push_vlan, the intention is to modify the VID on the existing
VLAN header and push a new VLAN header with the VID _inherited_ from the
previous of_set_vlan_vid.
Currently the above is not covered by the PMD; the PMD always fetches the
VLAN information from the item for the of_push_vlan action.
Fix it by fetching the VLAN information from the item only when there is no
previous of_set_vlan_vid action.
[1]: testpmd> flow create 2 ingress transfer group 1 priority 3 pattern
eth / vlan vid is 2731 / ipv4 / end actions
of_set_vlan_vid vlan_vid 3209 / of_push_vlan ethertype
0x88A8 / port_id id 1 / end
Fixes: b8c0372bc5 ("net/mlx5: fix set VLAN ID/PCP in new header")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Reviewed-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
1400 series adapters support multiple MARK and FLAG action types.
e.g.: mark id 10 / queue index 2 / mark id 11 / queue index 3
Remove the restriction in the Flow Manager implementation.
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Support rte_flow RSS action on outer headers (level 0). RSS ranges on
the non-default port are OK.
Restrictions:
- The RETA is ignored. The hash function is simply applied across
the RSS queue range.
- The queues used in the RSS group must be sequential.
- There is a performance hit if the number of queues is not a power
of 2.
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Each RTE RQ is represented on the enic as a Start Of Packet (SOP) queue
and an overflow queue (DATA). These were arranged SOP0/DATA0, SOP1/DATA1,
... but they need to be arranged SOP0, SOP1, ..., DATA0, DATA1, ... so that
rte_flow RSS queue ranges work.
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Update the VIC Flow Manager API. The extensions will allow support for:
- Decap and strip VLAN
- Remove outer VLAN
- Set Egress port
- Set VLAN when replicating encapped packets
- RSS queue ranges on outer header
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
The current implementation produces wrong ordering for several cases
like these:
1. mark, decap, steer
Current: steer, mark, decap
Correct: mark, steer, decap
2. decap, steer, steer
Current: steer, steer, decap
Correct: steer, decap, steer
Simplify the logic and swap 1st steer and decap.
Also, allow just one decap action per flow.
Fixes: ea7768b5bb ("net/enic: add flow implementation based on Flow Manager API")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Signed-off-by: John Daley <johndale@cisco.com>
Refactor the common memory btree and cache management into the common
driver. Replace some input parameters of the MR APIs with more common data
structures such as PD, port_id, share_cache, ... so that multiple PMD
drivers can use those MR APIs.
Modify the mlx5 net PMD to use the MR management APIs from the common
driver.
Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Refactor common multi-process handling code from the net PMD into the
common driver, using the tuple mp_id{name, port_id} as the standard input
parameter for all multi-process IPC APIs instead of rte_eth_dev.
Modify the net PMD to use the multi-process APIs from the common driver.
Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, when the jump action is translated, the table reference count is
increased every time. But the table resource reference is only decreased
when the jump action itself is released. It means that for a jump action
which was referenced more than one time, the table references that were
increased are decreased only one time when the jump action is released.
Add a table release when the jump action was not newly created.
Fixes: 684b9a1b1f ("net/mlx5: support jump action")
Cc: stable@dpdk.org
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, the meter suffix table is created and saved in the mlx5
shared struct. This causes the suffix table to never be released,
even when there are no meter rules.
Move the suffix table to the meter domain struct so that the suffix table
can be released when all the meter rules are destroyed.
Fixes: 46a5e6bc6a ("net/mlx5: prepare meter flow tables")
Cc: stable@dpdk.org
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The multi-stride operations now allow reducing the stride size
while supporting Jumbo frames. That means that it is possible
to have mbufs configured with a size smaller than the whole
received packet. It is not an issue during normal MPRQ operations
since we attach external buffers instead of copying the data
into the mbuf itself. But this is not the case in "emergency mode",
when we have to copy every packet because no more external
mbufs are available. Assemble a multi-segment packet to overcome
this issue in case scatter mode is enabled, and drop the packet if not.
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The MPRQ feature should be updated to allow a packet to be received
into multiple strides in order to support MTUs exceeding 8KB.
Special care is needed to prevent the headroom corruption in the
multi-stride mode since the headroom space is borrowed by the PMD
from the tail of the preceding stride. Copy the whole packet into
a separate mbuf in this case or just the overlapping data if the
Rx scattering is supported by an application.
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Define a device parameter to configure the log2 of the stride size for MPRQ
- mprq_log_stride_size. The user is able to specify a stride size in the
range allowed by the underlying hardware. The default stride size is defined
as 2048 bytes to encompass the most commonly used packet sizes on the
Internet (MTU 1518 and less) and will be used in case the maximum configured
packet size cannot fit into the largest possible stride size. Otherwise the
stride size is set to a value large enough to encompass a whole packet.
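For example, an assumed invocation selecting 2^11 = 2048-byte strides (the
PCI address is a placeholder and mprq_en is the existing devarg enabling
MPRQ) could be:
        -w 0000:03:00.0,mprq_en=1,mprq_log_stride_size=11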
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
When we use a profile rule as a switch rule to download, if
we download 2 different rules one by one, there will be a
rejection from the function ice_aq_sw_rules(), for example:
"flow create 0 priority 0 ingress pattern eth / ipv6 / ah
/ end actions queue index 3 / end"
"flow create 0 priority 0 ingress pattern eth / ipv6 / esp
/ end actions queue index 2 / end"
That is because the 2 rules have the same s_rule input set
except for the action queue index, so the second one is rejected
by hardware. So we have to use different recipes for them.
Also, we need to add recipe_id to keep a record of the recipe
index, which will be used in rule removal; otherwise, there
will be an error when searching for the recipe in the function
ice_rem_adv_rule() if we create 2 or more profile rules.
For example:
"flow create 0 priority 0 ingress pattern eth / ipv4 / udp
/ pfcp s_field is 1 / end actions queue index 4 / end"
"flow create 0 priority 0 ingress pattern eth / ipv4 / udp
/ pfcp s_field is 0 / end actions queue index 5 / end"
then,
"flow flush 0"
you will find that only the first rule is deleted,
because ice_find_recp() always returns the recipe
id of the first rule.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Yuan Peng <yuan.peng@intel.com>
In order to find an accurate recipe for the switch filter, we
need to add the mask as an element when searching for a recipe.
If we create different rules with the same input set, but
using different masks, then the proper recipes should use
those different masks.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Nannan Lu <nannan.lu@intel.com>
When we add a long switch rule, we need to check the
final number of recipes; if it is larger than
ICE_MAX_CHAIN_RECIPE, we should refuse this rule.
For example:
"flow create 0 ingress pattern eth / ipv6
src is CDCD:910A:2222:5498:8475:1111:3900:1536
dst is CDCD:910A:2222:5498:8475:1111:3900:2022
tc is 3 / udp dst is 45 / end actions queue index 2 / end"
This rule will consume 6 recipes; if it is not refused, it
will cause the following code to overwrite lkup_indx and mask.
LIST_FOR_EACH_ENTRY(entry, &rm->rg_list, ice_recp_grp_entry,
l_entry) {
last_chain_entry->fv_idx[i] = entry->chain_idx;
buf[recps].content.lkup_indx[i] = entry->chain_idx;
buf[recps].content.mask[i++] = CPU_TO_LE16(0xFFFF);
..........
}
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Nannan Lu <nannan.lu@intel.com>
Restrict pointer aliasing to optimize the generated code.
The patch showed a ~3% performance uplift on the Arm N1SDP platform, and no
degradation on ThunderX2. The test case is RFC2544 zero-loss L2
forwarding running testpmd.
[1] https://gcc.gnu.org/onlinedocs/gcc-4.8.5/gcc/Restricted-Pointers.html
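As a generic illustration of the technique (not the exact patch code),
restrict-qualified pointers tell the compiler that the buffers do not alias:
#include <stdint.h>
/* With restrict, the compiler may keep values in registers across stores
 * and vectorize the loop, since dst and src cannot overlap.
 */
static void
copy_words(uint32_t *__restrict dst, const uint32_t *__restrict src, int n)
{
        int i;
        for (i = 0; i < n; i++)
                dst[i] = src[i];
}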
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
To keep ordering of mixed accesses, 'DMB OSH' is sufficient.
The 'DSB' inside I40E_PCI_REG_WRITE is overkill. [1]
This patch fixes it by replacing them with just-sufficient barriers in the
normal PMD and vPMD.
It showed a 7% performance uplift on ThunderX2 and 4% on Arm N1SDP.
The test case is the RFC2544 zero-loss test running testpmd.
[1] http://inbox.dpdk.org/dev/CALBAE1M-ezVWCjqCZDBw+MMDEC4O9qf0Kpn89EMdGDajepKoZQ@mail.gmail.com
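A hedged sketch of the idea (function and field names are illustrative, not
the actual patch): order prior descriptor stores with rte_io_wmb(), which
maps to 'DMB OSH' on aarch64, before a relaxed doorbell write:
#include <rte_atomic.h>
#include <rte_byteorder.h>
#include <rte_io.h>
/* Bump a Tx tail register with just-sufficient ordering. */
static inline void
tx_tail_update(volatile void *tail_reg, uint32_t tail)
{
        rte_io_wmb();
        rte_write32_relaxed(rte_cpu_to_le_32(tail), tail_reg);
}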
Fixes: ae0eb310f2 ("net/i40e: implement vector PMD for ARM")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Set up the NIC to generate MSI-X interrupts.
Set the IVAR register to map interrupt causes to vectors.
Implement interrupt enable/disable functions.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Disable the CQ_DISABLED error interrupt in NIX_LF_ERR_INT
to fix spurious interrupts in event dev mode. Also skip
configuring RSS when the RQ count is '0', because the
RSS table initialization is done incorrectly due to a
divide-by-zero error, leading to an RQ_OOR error
in NIX_LF_ERR_INT.
Fixes: 83ce2880e2 ("net/octeontx2: support RSS")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
In the case of the bnx2xvf PMD, Tx packets can carry a VLAN id in 2 ways:
1. Setting the mbuf ol_flags=PKT_TX_VLAN_PKT and passing the
VLAN id in mbuf->vlan_tci.
2. The Tx packet itself has the VLAN id included in the packet.
The first case works as expected, but the second case, where
the VLAN id is included in the Tx packet itself, was found not to
work as expected. To handle that we need to properly set the
start_bd bitfield and the vlan_or_ethertype instead of setting it
to just the ethertype in the VF case.
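For reference, a minimal application-side sketch of the first (already
working) case, using standard mbuf fields:
#include <rte_mbuf.h>
/* Request HW VLAN tag insertion for a Tx mbuf. */
static inline void
request_vlan_insertion(struct rte_mbuf *m, uint16_t vlan_id)
{
        m->ol_flags |= PKT_TX_VLAN_PKT;
        m->vlan_tci = vlan_id;
}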
Signed-off-by: Souvik Dey <sodey@rbbn.com>
Acked-by: Rasesh Mody <rmody@marvell.com>
Add support for the set_mc_addr_list device operation in the bnx2xvf PMD.
The configured addresses are stored in the device private area, so
they can be flushed before adding new ones.
Without this, IPv6 multicast packets were not properly forwarded to the
Guest VF.
Signed-off-by: Souvik Dey <sodey@rbbn.com>
Acked-by: Rasesh Mody <rmody@marvell.com>
fgets(3)/fread(3)/fscanf(3) etc. use mmap(2)/munmap(2), which leads
to TLB shootdown interrupts to all DPDK application cores, including Rx
cores. This can cause packet drops. Use read(2)/write(2) instead.
Bugzilla ID: 440
Cc: stable@dpdk.org
Signed-off-by: Mohsin Shaikh <mohsinshaikh@niometrics.com>
Reviewed-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When creating a flow, usually the creating routine is called in
serial. No parallel execution is supported right now. The same
function will be called only once for a single flow creation.
But there is a special case that the creating routine will be called
nested. If the xmeta feature is enabled and there is FLAG / MARK in
the actions list, some metadata reg copy flow needs to be created
before the original flow is applied to the hardware.
In the flow non-cached mode, resources only for flow creation will
not be saved anymore. The memory space is pre-allocated and reused
for each flow. A global index for each device is used to indicate
the memory address of the resources. If the function is called in
nested mode, then the index will be reset, corrupting everything.
To solve this, a nested index is introduced to save the position for
the original flow creation. Currently, only one level nested call
of the flow creating routine is supported.
Fixes: e7bfa3596a ("net/mlx5: separate the flow handle resource")
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Variable storage of the same name is merged together
if compiled with -fcommon. This is the default.
This default behaviour allows declaring a variable in a header file and
sharing it across all .o binaries thanks to merging at link time.
In the case of dlopen linking of the glue library, the pointer mlx4_glue
references the glue functions struct and is set after calling
dlopen.
If compiling with -fno-common (the default in GCC 10), the variables must
be declared as extern to avoid multiple re-definitions.
In case the glue layer is split into a glue library, the variable mlx4_glue
needs to have its own storage for the rest of the PMD.
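A generic sketch of the required pattern (the exact placement in the mlx4
sources is an assumption):
/* In the header: declaration only, no storage. */
extern const struct mlx4_glue *mlx4_glue;
/* In exactly one .c file: the single definition providing the storage. */
const struct mlx4_glue *mlx4_glue;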
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Matan Azrad <matan@mellanox.com>
Currently, when an upper level application calls the API function named
rte_eth_dev_set_vlan_offload to configure the hardware VLAN filter
offload and calls the rte_eth_promiscuous_enable API to enable
promiscuous mode on an hns3 PF device, the driver can't receive
packets with a VLAN tag which has not been added by calling the API
function named rte_eth_dev_vlan_filter.
This patch fixes it by disabling the VLAN filter when setting
promiscuous mode and enabling the VLAN filter again after
promiscuous mode is disabled.
Fixes: 19a3ca4c99 ("net/hns3: add start/stop and configure operations")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Currently, the VLAN filter is enabled by default during initialization and
cannot be turned off on hns3 PF devices. If upper applications
don't call the rte_eth_dev_vlan_filter API function to set a VLAN on an
hns3 PF device, the hns3 PF PMD driver can't receive packets with a
VLAN tag. This leads to compatibility issues: the behavior of the
hns3 network engine differs from that of other NICs.
This patch disables the VLAN filter during initialization and allows the
upper level applications to enable or disable the VLAN filter.
Fixes: 411d23b9ea ("net/hns3: support VLAN")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
When an upper application calls the rte_eth_dev_rss_hash_conf_get API
function to get the RSS key parameters, the function should return the
RSS key length supported by the device. Otherwise, an error will occur
when the upper application needs to use the RSS key length supported
by this specific hardware for comparison and to configure the specified
key into hardware.
For example, consider the following scenario:
users want to use their own RSS key, but the length of the key is
bigger than the one supported by hardware.
As a result, users need to get the RSS key length supported by hardware
through the above API first, and then compare the obtained
RSS key length with the length of their own RSS key.
If the driver does not return the actual value, an error may occur when
the user calls the rte_eth_dev_rss_hash_update API function to configure
their own key into hardware.
Besides, this patch also fixes the problem of stepping on memory when the
RSS key array configured by users is shorter than the RSS key length
supported by the driver.
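A minimal application-side sketch of such a check, using the standard
ethdev API (the helper name is an assumption):
#include <rte_ethdev.h>
/* Program a private RSS key only if it fits the length the driver reports. */
static int
set_own_rss_key(uint16_t port_id, uint8_t *key, uint8_t key_len)
{
        struct rte_eth_rss_conf conf = { .rss_key = NULL };
        if (rte_eth_dev_rss_hash_conf_get(port_id, &conf) != 0)
                return -1;
        if (key_len > conf.rss_key_len)
                return -1;
        conf.rss_key = key;
        conf.rss_key_len = key_len;
        return rte_eth_dev_rss_hash_update(port_id, &conf);
}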
Fixes: c37ca66f2b ("net/hns3: support RSS")
Cc: stable@dpdk.org
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Currently, when an upper application calls the rte_eth_dev_info_get API
function to query the Rx offload capabilities of the hns3 network
engine, the RSS hash offload capability is missing.
This patch fixes it by adding the related capability in the
'.dev_infos_get' ops implementation functions named hns3_dev_infos_get
and hns3vf_dev_infos_get for the hns3 PF/VF PMD driver.
Fixes: c37ca66f2b ("net/hns3: support RSS")
Cc: stable@dpdk.org
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
This patch fixes the issue that flow director rules are not cleared during
initialization, which led to flow director rules remaining after the upper
application (such as testpmd) was restarted.
Fixes: fcba820d9b ("net/hns3: support flow director")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Currently, Rx interrupts cannot work normally after a reset (such as FLR,
global reset and IMP reset) when running the l3fwd-power application on
the hns3 network engine.
The root cause is that the hardware configuration for Rx interrupts
is not recovered after the reset.
This patch fixes it with the following modifications.
1. The internal static function named hns3(vf)_init_ring_with_vector is
moved from hns3_init_pf to hns3(vf)_init_hardware because
hns3(vf)_init_hardware is called both in the initialization and the
RESET_STAGE_DEV_INIT stage of the reset process.
2. The internal static function named hns3(vf)_restore_rx_interrupt is
added in hns3(vf)_restore_conf; it is used to recover the hardware
configuration of the Rx queue interrupt vectors in the
RESET_STAGE_DEV_INIT stage of the reset process.
3. The internal static functions named hns3_dev_all_rx_queue_intr_enable
and hns3_enable_all_queues are added in hns3(vf)_dev_start (which is
called in the initialization), so after calling the rte_eth_dev_start
API successfully, the driver is ready to work.
4. The functions named hns3_dev_all_rx_queue_intr_enable and
hns3_enable_all_queues are also added in hns3(vf)_start_service (which is
called in the RESET_STAGE_DEV_INIT stage of the reset process), so
after start_service, the driver is ready to work.
Note:
1. Because FLR clears the queues' interrupt enable bits in the hardware
configuration, we add a call to hns3_dev_all_rx_queue_intr_enable to
enable interrupts before enabling queues.
2. After finishing the initialization, we can enable queues to work by
calling the internal function named hns3_enable_all_queues.
Fixes: 02a7b55657 ("net/hns3: support Rx interrupt")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Currently, when an upper application calls the rte_eth_dev_mac_addr_add API
function to add an MC MAC address on an hns3 PF/VF device, it will
fail.
In the hns3 network engine, UC and MC MAC addresses are added with
different firmware commands. We need to determine whether the input address
is a UC or an MC address in order to call different commands in the
'.mac_addr_add' and '.mac_addr_remove' ops implementation functions in the
hns3 PF and VF driver, as below:
hns3_add_mac_addr
hns3vf_add_uc_mac_addr
hns3_remove_mac_addr
hns3vf_remove_mac_addr
By the way, it is recommended to call the rte_eth_dev_set_mc_addr_list API
function to set MC MAC addresses, because using the
rte_eth_dev_mac_addr_add API function to set an MC MAC address may affect
the specifications of UC MAC addresses.
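A minimal application-side sketch of the recommended call (the helper name
is illustrative):
#include <rte_ethdev.h>
#include <rte_ether.h>
/* Replace the whole MC address list in one call instead of adding MC
 * addresses one by one with rte_eth_dev_mac_addr_add().
 */
static int
configure_mc_addrs(uint16_t port_id, struct rte_ether_addr *mc_addrs,
                   uint32_t nb_mc)
{
        return rte_eth_dev_set_mc_addr_list(port_id, mc_addrs, nb_mc);
}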
Fixes: 7d7f9f80bb ("net/hns3: support MAC address related operations")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
This patch uses the macro RTE_INTR_VEC_ZERO_OFFSET provided by the DPDK
framework instead of the magic number 0.
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Currently, the return value processing of some functions can be combined,
and as a result some code can be optimized.
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Fix a comment that is no longer correct as the code evolved.
Fixes: 9470427c88 ("net/virtio: do not store PCI device pointer at shared memory")
Cc: stable@dpdk.org
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
If a vhost device is closed before eth_dev_configure has been done on the
device, internal resources allocated to the device would not be freed.
This patch fixes it.
Fixes: 3d01b759d2 ("net/vhost: delay driver setup")
Cc: stable@dpdk.org
Signed-off-by: Itsuro Oda <oda@valinux.co.jp>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
With this patch, the promiscuous and multicast fields are initialized as
enabled for the vhost PMD by default. This allows the devices to be used
when running applications that attempt to enable promiscuous or
multicast mode.
Similar things have been done for other virtual PMDs by commit f165210321
("drivers/net: enable promiscuous and multicast by default").
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Added vHost PMD arguments 'linear-buffer' and 'ext-buffer'
to configure 'RTE_VHOST_USER_LINEARBUF_SUPPORT' and
'RTE_VHOST_USER_EXTBUF_SUPPORT' flags in the vhost library
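A hypothetical invocation enabling both options (the vdev name and socket
path are illustrative assumptions) could be:
        --vdev 'eth_vhost0,iface=/tmp/vhost0.sock,linear-buffer=1,ext-buffer=1'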
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Because some apps may pass illegal parameters, the driver adds
checks for illegal parameters and DFX statistics, which include the
sge_len0 and mbuf_null txq xstats members.
Signed-off-by: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
When the FW is hot-active, which means the FW is being updated without
needing to reboot the OS, the FW returns HINIC_DEV_BUSY_ACTIVE_FW to the PF
driver because the firmware is being reinitialized. At that point the cmdq
initialization that relies on the FW channel will fail, so the driver
should reinitialize the cmdq when the port starts.
Fixes: 0194313b2d ("net/hinic/base: fix port start during FW hot update")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
Add a new device argument, 'no-rx', which prevents the PMD from receiving
packets.
This is useful for testing, when a PMD is needed only as a target to send
packets to.
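A hypothetical invocation (the vdev name is illustrative) could be:
        --vdev 'net_null0,no-rx=1'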
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Prefer the 'unsigned int' storage type keyword over 'unsigned'; this also
silences the checkpatch warnings.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
There is no need to check whether the argument exists or not;
`rte_kvargs_process` returns success if the argument is not provided at
all.
Fixes: c743e50c47 ("null: new poll mode driver")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
The secondary process uses the primary process device, but while setting
the Rx/Tx functions it uses the device arguments from the secondary
process instead of the primary ones.
This may cause the primary and secondary processes to use different Rx/Tx
functions unintentionally.
Fixes: bccc77a6a7 ("net/null: fix multi-process Rx and Tx")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Update base code release version in readme.
Signed-off-by: Jiaqi Min <jiaqix.min@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Introduce constants for handling PTP pins used for external
clock source.
Signed-off-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Signed-off-by: Jiaqi Min <jiaqix.min@intel.com>
Acked-by: Piotr Kwapulinski <piotr.kwapulinski@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
This change adds a new device ID and handles it in the same way as the
X710-T*L head of family. The new device ID is for the new V710-T*L adapter
supporting speeds up to 5G.
Signed-off-by: Zalfresso-Jundzillo <marekx.zalfresso-jundzillo@intel.com>
Signed-off-by: Jiaqi Min <jiaqix.min@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Update the X722/X710 FW API version to 1.10.
Signed-off-by: Piotr Azarewicz <piotr.azarewicz@intel.com>
Signed-off-by: Jiaqi Min <jiaqix.min@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
This patch adds iavf_flow_create, iavf_flow_destroy,
iavf_flow_flush and iavf_flow_validate support;
these are used to handle all the generic filters.
This patch supports basic L2, L3, L4 and GTPU patterns.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The 'mac_addrs' freeing has been moved to rte_eth_dev_release_port(),
so freeing 'mac_addrs' like this in pfe_eth_exit() is unnecessary and
will cause a double free.
Fixes: 67fc3ff97c ("net/pfe: introduce basic functions")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Currently, the fallback counter is also allocated from the pool, so the
fallback-specific function code becomes somewhat duplicated.
Reorganize the fallback counter code to make it reuse the normal
counter code.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Currently, the counter struct saves the members used by both batch
counters and non-batch counters. The members which are only used
by non-batch counters cost 16 bytes of extra memory for batch counters.
As there will normally be a limited number of non-batch counters, mixing
the non-batch counter and batch counter members becomes quite expensive
for batch counters. If 1 million batch counters are created, it means
16 MB of memory which will not be used by the batch counters is allocated.
Splitting the mlx5_flow_counter struct for batch and non-batch counters
helps save this memory.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Currently, DV and verbs counters are both changed to be indexed. It means
that while creating a flow with a counter, the flow can save the index
value to address the counter.
Saving the 4-byte index value in the rte_flow instead of an 8-byte
pointer helps save memory with millions of flows.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This part of the counter optimization changes the DV counter to be indexed,
as has already been done in verbs. In this case, all mlx5 flow counters
can be addressed by index.
The counter index is composed of the pool index and the counter offset in
the pool counter array. The batch and non-batch counter dcs ID offset
0x800000 is used to avoid mixing up the indexes. As the batch counter dcs
ID starts from 0x800000 and the non-batch counter dcs starts from 0, the
0x800000 offset is added to the batch counter index to indicate the
index of a batch counter.
The counter pointer in the rte_flow struct will be changed to an index
instead of a pointer. It will save 4 bytes of memory for every rte_flow.
With millions of rte_flow, it will save MBytes of memory.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This is part of the counter optimization which saves the counter index
instead of the counter pointer in the rte_flow.
Placing the verbs counter into the container pool helps the counter to be
indexed correctly, independently of the raw counter.
The counter pointer in rte_flow will be changed to an index value after
the DV counter is also changed to be indexed.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Query generation was introduced to avoid a counter being reallocated
before the counter statistics are fully updated. Counters released
between the query trigger and the query handler may miss the packets
that arrived in the trigger-handler gap period. In this case, the user
can only allocate the counter when the pool query_gen is greater than
the counter query_gen + 1, which indicates that a new round of query has
finished and the statistics are fully updated.
Splitting the pool query_gen into start_query_gen and end_query_gen helps
to better identify the counters released in the gap period.
It also helps counters released before the query trigger or after the
query handler to be reallocated more efficiently.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
As a non-batch counter pool allocates only one counter every time, after
the newly allocated counter pops out, the pool will be empty and moved to
the end of the pool list in the container.
Currently, a new non-batch counter allocation may happen together with a
new counter pool allocation, which means the new counter comes from a new
pool. While the new pool is allocated, the container resize and switch
happen. In this case, after the pool becomes empty, it should be added to
the pool list of the new container it belongs to.
Update the container pointer accordingly with the pool allocation to avoid
adding the pool to the incorrect container.
Fixes: 5382d28c21 ("net/mlx5: accelerate DV flow counter transactions")
Cc: stable@dpdk.org
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The v2.1.0 release refactors the Tx and Rx paths, includes a few bug fixes
and also adds new features which are going to be available with the
newest hardware.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Some ENA devices can pass a descriptor with length 0 to the driver. To
avoid an extra allocation, the descriptor can be reused by simply putting
it back to the device.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
The original Tx function was very long and contained both the cleanup
and the sending sections. Because of that it had a lot of local
variables, deep indentation and was hard to read.
This function was split into 2 sections:
* Sending - which is responsible for preparing the mbuf, mapping it
to the device descriptors and finally, sending the packet to the HW
* Cleanup - which releases the packets sent by the HW. The loop which was
releasing packets was reworked a bit, to make the intention more visible
and aligned with other parts of the driver.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
To improve code readability, an abstraction was added for operating on IO
ring indexes.
The driver was defining a local variable for the ring mask in each function
that needed to operate on the ring indexes. Now it is stored in the ring,
as this value won't change unless the size of the ring changes, and macros
for advancing indexes using the mask have been added.
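A generic sketch of such masked index advancing (the names are
illustrative, not the driver's):
#include <stdint.h>
/* With a power-of-two ring size, storing mask = size - 1 once lets
 * indexes be advanced without a modulo operation.
 */
static inline uint16_t
ring_idx_next(uint16_t idx, uint16_t ring_mask)
{
        return (idx + 1) & ring_mask;
}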
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
The divider used for both the Tx and Rx cleanup/refill thresholds can cause
too big a delay for really big rings - for example, if an 8k Rx ring is
used, the refill won't trigger until the threshold of 1024 is reached. It
will also cause the driver to try to allocate that many descriptors.
Limiting it by a fixed value - 256 in that case - limits the maximum
time spent in the repopulate function.
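A generic sketch of the idea (the constants are assumptions derived from
the 8k-ring example above):
#include <rte_common.h>
#define REFILL_THRESH_DIVIDER 8
#define REFILL_THRESH_MAX 256
/* Cap the cleanup/refill threshold so huge rings do not delay refills. */
static inline unsigned int
refill_threshold(unsigned int ring_size)
{
        return RTE_MIN(ring_size / REFILL_THRESH_DIVIDER,
                       (unsigned int)REFILL_THRESH_MAX);
}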
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
The ena_com API should be preferred for getting the number of
used/available descriptors, unless an extra calculation needs to be
performed.
Some helper variables were added for storing values that are later
reused. Moreover, for limiting the number of sent/received packets to
the number of available descriptors, RTE_MIN is used instead of an if
statement, which was doing a similar thing but was less descriptive.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
* Split the main Rx function into multiple ones - the body of the main
function was very big and further there were 2 nested loops, which made
the code hard to read
* Rework how the Rx mbuf chains are being created - instead of having a
while loop with a conditional check for the first segment, handle
this segment outside the loop and, if more fragments exist,
process them inside.
* Initialize the Rx mbuf using a simple function - it's the common thing
for the 1st and next segments.
* Create a structure for the Rx buffer to align it with the Tx path, other
ENA drivers and to make the variable name more descriptive - on DPDK, the
Rx buffer must hold only the mbuf, so initially an array of mbufs was used
as the buffers. However, it was misleading, as it was named
"rx_buffer_info". To make it clearer, a structure holding the mbuf
pointer was added and now there is a possibility to expand it in the
future without reworking the driver.
* Remove redundant variables and conditional checks.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>