Currently, it is not possible to create more than one of the
following flows for the ETH + VLAN pattern:
1. flow create 0 ingress pattern eth / vlan vid is 350 / end
actions queue index 2 / end
2. flow create 0 ingress pattern eth / vlan vid is 351 / end
actions queue index 3 / end
The root cause is that vlan_tci is not set correctly, which causes
the keys of the two flows to be identical.
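As a hedged illustration (not the actual i40e code), the fix amounts
to carrying the VLAN TCI from the pattern into the flow key so the
two flows above stay distinct; the helper below is hypothetical:

    #include <stdint.h>
    #include <rte_flow.h>

    /* Hypothetical helper: copy the TCI from the VLAN pattern item
     * into the filter's input set so each VID yields a distinct key. */
    static void
    fdir_fill_vlan_tci(uint16_t *key_vlan_tci,
                       const struct rte_flow_item_vlan *vlan_spec)
    {
        /* rte_flow carries the TCI in network byte order. */
        *key_vlan_tci = vlan_spec->tci;
    }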
Fixes: 42044b69c6 ("net/i40e: support input set selection for FDIR")
Cc: stable@dpdk.org
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Support checking the status of Rx and Tx descriptors.
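A usage sketch of the generic ethdev calls being implemented (the
port/queue/offset values are the caller's):

    #include <rte_ethdev.h>

    /* Query the state of the descriptor 'offset' entries beyond the
     * next one to be processed on the given Rx/Tx queues. */
    static void
    check_descriptors(uint16_t port_id, uint16_t queue_id, uint16_t offset)
    {
        int rx = rte_eth_rx_descriptor_status(port_id, queue_id, offset);
        int tx = rte_eth_tx_descriptor_status(port_id, queue_id, offset);

        if (rx == RTE_ETH_RX_DESC_DONE)
            ; /* descriptor holds a received, unprocessed packet */
        if (tx == RTE_ETH_TX_DESC_FULL)
            ; /* descriptor is still in use by the hardware */
    }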
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The buffer split feature is now mentioned in the mlx5 PMD
documentation, and a description of its limitations is added as
well.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add the rte_eth_dev_info->rx_seg_capa parameters (see the query
sketch below):
- receiving into multiple pools is supported
- buffer offsets are supported
- there is no offset alignment requirement
- the maximal number of segments is reported
- the buffer split offload flag is reported
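A hedged sketch of how an application might query these capabilities
(field names per the 20.11 ethdev definitions):

    #include <rte_ethdev.h>

    static int
    buffer_split_usable(uint16_t port_id, uint16_t nb_seg_wanted)
    {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            return 0;
        return dev_info.rx_seg_capa.multi_pools &&
               dev_info.rx_seg_capa.offset_allowed &&
               dev_info.rx_seg_capa.max_nseg >= nb_seg_wanted;
    }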
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Only the regular rx_burst routine is updated to support split,
because the vectorized routines do not support scatter and MPRQ
does not support split at all.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The split feature for receiving packets was added to the mlx5 PMD.
Now an Rx queue can receive data into buffers belonging to different
pools, so the memory of all the involved pools must be registered
for DMA operations in order to allow the hardware to store the
data.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The scatter-gather elements should be configured appropriately to
support the buffer split feature. The application provides the
desired settings for the segments at the beginning of the packets,
and the PMD pads the buffer chain (if needed) with the attributes of
the last specified segment to accommodate a packet of maximal
length.
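A minimal sketch of that padding step, with illustrative names (not
the actual mlx5 code):

    #include <stdint.h>

    /* Pad the segment-length array with copies of the last specified
     * segment until the chain covers a packet of maximal length
     * (assumes n_seg >= 1). */
    static unsigned int
    pad_seg_chain(uint32_t seg_len[], unsigned int n_seg,
                  unsigned int max_seg, uint32_t max_pkt_len)
    {
        uint32_t covered = 0;
        unsigned int i;

        for (i = 0; i < n_seg; i++)
            covered += seg_len[i];
        while (covered < max_pkt_len && i < max_seg) {
            seg_len[i] = seg_len[n_seg - 1];
            covered += seg_len[i++];
        }
        return i; /* total number of SGEs configured */
    }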
Some limitations are implied. The MPRQ feature should be disengaged
if split is requested, because MPRQ neither supports pushing data to
dedicated pools nor follows the flexible buffer sizes. The
vectorized rx_burst routines do not support scattering (they are
extremely simplified and work over a single segment only) and cannot
handle split either.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
A routine is added to provide Rx queue setup with an extended
receiving buffer description. It allows the application to specify
the desired segment lengths, data position offsets in the buffer,
and a dedicated memory pool for each segment.
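A hedged usage sketch against the 20.11 ethdev API (the pools, sizes
and offsets are illustrative; DEV_RX_OFFLOAD_BUFFER_SPLIT is the
offload flag name of that release):

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Illustrative split: the first 128 bytes land in hdr_pool, the
     * rest in data_pool (length 0 means "up to the mbuf data size"). */
    static int
    setup_split_rxq(uint16_t port_id, uint16_t qid, unsigned int socket,
                    struct rte_mempool *hdr_pool,
                    struct rte_mempool *data_pool)
    {
        union rte_eth_rxseg rx_seg[2] = {
            { .split = { .mp = hdr_pool, .length = 128, .offset = 0 } },
            { .split = { .mp = data_pool, .length = 0, .offset = 0 } },
        };
        struct rte_eth_rxconf rxconf = {
            .offloads = DEV_RX_OFFLOAD_BUFFER_SPLIT,
            .rx_nseg = 2,
            .rx_seg = rx_seg,
        };

        /* The mempool argument is NULL when rx_seg/rx_nseg are set. */
        return rte_eth_rx_queue_setup(port_id, qid, 512, socket,
                                      &rxconf, NULL);
    }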
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The tunnel offload API provides a hardware-independent, unified
model to offload tunneled traffic. Key model elements are:
- apply matches to both outer and inner packet headers during the
entire offload procedure;
- restore the outer header of a partially offloaded packet;
- the model is implemented as a set of helper functions.
Implementation details:
* the tunnel_offload PMD parameter must be set to 1 to enable the feature.
* the application cannot use MARK and META flow actions with tunnels.
* the JUMP action offload is restricted to the tunnel-steering rule only.
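A hedged sketch of the helper-function flow (VXLAN is chosen purely
for illustration):

    #include <rte_flow.h>

    /* Ask the PMD for the actions implementing tunnel decap, use
     * them in the steering rule, then release them. */
    static int
    offload_tunnel(uint16_t port_id)
    {
        struct rte_flow_tunnel tunnel = {
            .type = RTE_FLOW_ITEM_TYPE_VXLAN,
        };
        struct rte_flow_action *pmd_actions;
        uint32_t n_pmd_actions;
        struct rte_flow_error err;
        int ret;

        ret = rte_flow_tunnel_decap_set(port_id, &tunnel, &pmd_actions,
                                        &n_pmd_actions, &err);
        if (ret != 0)
            return ret;
        /* ... create the steering flow with pmd_actions prepended ... */
        return rte_flow_tunnel_action_decap_release(port_id, pmd_actions,
                                                    n_pmd_actions, &err);
    }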
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Implement shared action create/destroy/update/query. The current
implementation is limited to the shared RSS action only. The shared
RSS action create operation prepares hash RX queue objects for all
supported permutations of the hash. The shared RSS action update
operation relies on the functionality to modify the hash RX queue
introduced in one of the previous commits in this patch series.
Implement the RSS shared action and handle shared RSS on flow apply
and release. When handling a shared RSS action, the lookup for the
hash RX queue object is limited to the set of objects stored in the
shared action itself, and the lookup inside the shared action is
performed by hash only.
The current implementation is limited to DV flow driver operations,
i.e. the verbs flow driver operations don't support shared actions.
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Handle shared actions on flow validation/creation/destruction.
The mlx5 PMD translates a shared action into a regular one before
handling flow validation/creation. The shared action translation is
applied to utilize the same execution path for both shared and
regular actions. The current implementation supports shared action
translation for the shared RSS action only.
RSS action validation is split so that the shared RSS action is
validated on its creation, in addition to the action validation in
the flow validation/creation path.
Implement rte_flow shared action API for mlx5 PMD, mostly forwarding
calls to flow driver operations (see struct mlx5_flow_driver_ops).
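A hedged usage sketch of the API being forwarded (signatures per the
20.11 rte_flow experimental API, later renamed to indirect actions;
the RSS contents are the caller's):

    #include <rte_flow.h>

    /* Create a shared RSS action once; flows then reference it via
     * an action of type RTE_FLOW_ACTION_TYPE_SHARED. */
    static struct rte_flow_shared_action *
    create_shared_rss(uint16_t port_id,
                      const struct rte_flow_action_rss *rss)
    {
        struct rte_flow_shared_action_conf conf = { .ingress = 1 };
        struct rte_flow_action action = {
            .type = RTE_FLOW_ACTION_TYPE_RSS,
            .conf = rss,
        };
        struct rte_flow_error err;

        return rte_flow_shared_action_create(port_id, &conf, &action,
                                             &err);
    }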
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Implement modification of the hash Rx queue object (see
mlx5_hrxq_modify()). This implementation relies on the capability to
modify the TIR object via the DevX API, i.e. the current
implementation doesn't support verbs HW object operations. The
functionality to modify the hash Rx queue object is a prerequisite
to implement rte_flow_shared_action_update() for the shared RSS
action in the mlx5 PMD.
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Implement TIR modification (see mlx5_devx_cmd_modify_tir()) using
the DevX API. TIR is the object containing the hash table of the Rx
queue. The functionality to configure/modify this HW-related object
is a prerequisite to implement rte_flow_shared_action_update() for
the shared RSS action in the mlx5 PMD. HW-related structures for TIR
modification are added in mlx5_prm.h.
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Mark "BSD nic_uio", "Usage doc", and "Perf doc" as supported
for the bnxt PMD.
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Make SIMD initialization code less verbose by using appropriate
intrinsics when all lanes of a vector are initialized to the
same value.
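For illustration (generic SSE intrinsics, not necessarily the exact
lines changed):

    #include <emmintrin.h>

    static void
    example(void)
    {
        /* Before: every lane spelled out explicitly. */
        __m128i verbose = _mm_set_epi32(1, 1, 1, 1);

        /* After: one broadcast intrinsic when all lanes are equal. */
        __m128i concise = _mm_set1_epi32(1);

        (void)verbose;
        (void)concise;
    }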
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Just like iavf, setting the value to 2us allows for generally good
streaming packet performance while keeping latency down, and keeps
the performance of the PF and VF interfaces similar. The previous
value of 0x10 made the latency of a single packet receive as much as
16us.
Fixes: 65dfc889d8 ("net/ice: support Rx queue interruption")
Cc: stable@dpdk.org
Reported-by: Brian Johnson <brian.johnson@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The iavf driver was trying to use write-back on ITR but never set
an ITR, so it didn't work. This limited performance due to too much
PCIe traffic and partial writes during most benchmarking workloads.
Set the ITR during queue setup; the value can be checked at runtime
by reading register 0x2800. Setting it to 2us allows for generally
good streaming packet performance while keeping latency down.
Fixes: d6bde6b5ea ("net/avf: enable Rx interrupt")
Cc: stable@dpdk.org
Reported-by: Brian Johnson <brian.johnson@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The current hardware protection is based on a pthread mutex, which
works only for multiple threads within one process. In a
multi-process environment, the hardware state machine would be
corrupted by concurrent access, which means the original pthread
mutex mechanism needs to be enhanced.
The major modifications in this patch are listed below:
1. Create a mutex for the adapter in shared memory named
"mutex.IFPGA:domain:bus:dev.func" when the device is probed.
2. Create a shared memory region named "IFPGA:domain:bus:dev.func"
while the opae adapter is initializing. There is a reference count
in the shared memory; it is destroyed once the count drops to zero.
3. Two mutexes are created in shared memory and initialized with the
PTHREAD_PROCESS_SHARED flag (see the sketch after this list), one
for SPI and the other for I2C. They are passed to the SPI and I2C
drivers subsequently.
4. DTB data in flash is cached in shared memory, so the MAX10 driver
can read the DTB from shared memory instead of flash. This avoids
conflicts between concurrent hardware and software flash accesses.
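A minimal sketch of the process-shared mutex initialization in item
3 (the mutex pointer must reference the shared memory region):

    #include <pthread.h>

    /* 'mutex' must live inside the shared memory region so that
     * every process maps the same object. */
    static int
    init_shared_mutex(pthread_mutex_t *mutex)
    {
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(mutex, &attr);
        return pthread_mutexattr_destroy(&attr);
    }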
Signed-off-by: Wei Huang <wei.huang@intel.com>
Signed-off-by: Tianfei Zhang <tianfei.zhang@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Add two functions to complete the resource-freeing work: one is
'ifpga_adapter_destroy()', the other is 'ifpga_bus_uinit()'.
Then call 'opae_adapter_destroy()' and 'opae_adapter_data_free()'
in 'ifpga_rawdev_close()' to free the resources.
Also, 'opae_adapter_free()' is removed from 'ifpga_rawdev_destroy()':
because the opae adapter is pointed to by the dev_private member of
raw_dev, it will be freed in 'rte_rawdev_pmd_release()'.
Signed-off-by: Wei Huang <wei.huang@intel.com>
Signed-off-by: Tianfei Zhang <tianfei.zhang@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
'rte_intr_callback_unregister()' can return a positive value on
success, but 'ifpga_rawdev_destroy()' handled it as an error.
Instead, only a negative return value is now treated as a failure.
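A sketch of the corrected check (the callback and argument names are
illustrative):

    #include <rte_interrupts.h>

    static int
    unregister_irq(const struct rte_intr_handle *intr_handle,
                   rte_intr_callback_fn cb_fn, void *cb_arg)
    {
        /* Returns the number of unregistered callbacks on success,
         * or a negative value on error. */
        int ret = rte_intr_callback_unregister(intr_handle, cb_fn,
                                               cb_arg);

        return ret < 0 ? ret : 0; /* only negative means failure */
    }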
Fixes: e0a1aafe2a ("raw/ifpga: introduce IRQ functions")
Cc: stable@dpdk.org
Signed-off-by: Wei Huang <wei.huang@intel.com>
Signed-off-by: Tianfei Zhang <tianfei.zhang@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
The interrupt handle was copied to the local 'intr_handle' variable
by value before being passed to the IRQ functions. This led the IRQ
functions to update the local variable instead of 'ifpga_irq_handle'.
Instead, use the local 'intr_handle' variable as a pointer to
'ifpga_irq_handle', as intended.
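An illustration of the bug class (simplified, not the exact driver
code):

    #include <rte_interrupts.h>

    extern struct rte_intr_handle ifpga_irq_handle;

    static void
    example(void)
    {
        /* Before: a by-value copy; callees update the throwaway
         * copy. */
        struct rte_intr_handle copy = ifpga_irq_handle;

        /* After: a pointer, so updates land in 'ifpga_irq_handle'. */
        struct rte_intr_handle *intr_handle = &ifpga_irq_handle;

        (void)copy;
        (void)intr_handle;
    }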
Fixes: e0a1aafe2a ("raw/ifpga: introduce IRQ functions")
Cc: stable@dpdk.org
Signed-off-by: Wei Huang <wei.huang@intel.com>
Signed-off-by: Tianfei Zhang <tianfei.zhang@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Add RSS configuration, and support RSS hash and RETA operations for
the PF.
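A usage sketch of the generic RETA update path this enables (a
128-entry table is assumed for illustration):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Spread a 128-entry redirection table across nb_rxq queues. */
    static int
    spread_reta(uint16_t port_id, uint16_t nb_rxq)
    {
        struct rte_eth_rss_reta_entry64 reta_conf[2];
        uint16_t i;

        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < 128; i++) {
            reta_conf[i / RTE_RETA_GROUP_SIZE].mask |=
                1ULL << (i % RTE_RETA_GROUP_SIZE);
            reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
                i % nb_rxq;
        }
        return rte_eth_dev_rss_reta_update(port_id, reta_conf, 128);
    }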
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add check operations for VF function-level reset, mailbox messages,
and ACKs from the VF, and wait to process the messages.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add device stop, close, and reset operations, and support the
hardware thermal sensor.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>