Currently, the list memory is allocated by the list API caller.
Move the allocation into the create API in order to keep consistency
with the hlist utility.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
The atomic operations in the list utility do not need barriers because
the critical sections are protected by the RW lock.
Relax them.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
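As a minimal illustration of the relaxation (hedged; the function and
field names here are illustrative, not the actual mlx5 list code):

    /* The RW lock already orders the critical sections, so the
     * reference counting itself can use relaxed memory ordering. */
    static inline uint32_t
    entry_ref_inc(uint32_t *ref_cnt)
    {
        return __atomic_add_fetch(ref_cnt, 1, __ATOMIC_RELAXED);
    }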
When a cache entry is allocated by lcore A and is released by lcore B,
the driver should synchronize the cache list access of lcore A.
The design decision is to manage a counter per lcore cache that is
incremented atomically when a non-original lcore decreases the
reference counter of a cache entry to 0.
In the list register operation, before the running lcore starts a
lookup in its cache, it checks the counter in order to free invalid
entries in its cache.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
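A hedged sketch of the mechanism (field and function names are
illustrative only, not the actual driver code):

    struct lcore_cache {
        uint32_t inv_cnt; /* entries invalidated by other lcores */
        /* ... cached entries ... */
    };

    /* Non-original lcore B: dropped the last reference to an entry
     * that lives in lcore A's cache. */
    static void
    entry_last_unref_remote(struct lcore_cache *owner)
    {
        __atomic_fetch_add(&owner->inv_cnt, 1, __ATOMIC_RELAXED);
    }

    /* Original lcore A: run before every lookup in its own cache. */
    static void
    cache_gc(struct lcore_cache *c)
    {
        if (__atomic_load_n(&c->inv_cnt, __ATOMIC_RELAXED) == 0)
            return;
        /* Walk the cache, free entries whose refcount reached 0,
         * and decrement inv_cnt for each one freed. */
    }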
The mlx5 internal list utility is thread safe.
In order to synchronize list access between the threads, an RW lock is
taken for the critical sections.
The create/remove/clone/clone_free operations are in the critical
sections.
These operations are heavy and make the critical sections heavy
because they are used for memory and other resource
allocations/deallocations.
Move these operations out of the critical sections and use a
generation counter in order to detect parallel allocations.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
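The race detection can be sketched as follows (a simplified sketch;
gen_cnt, cb_create, lookup_locked and insert_locked are illustrative
names, not the actual mlx5 list API):

    static struct entry *
    list_register(struct list *list, void *ctx)
    {
        struct entry *e, *prev = NULL;
        uint32_t gen = __atomic_load_n(&list->gen_cnt, __ATOMIC_ACQUIRE);

        e = list->cb_create(list, ctx);        /* heavy work, unlocked */
        rte_rwlock_write_lock(&list->lock);
        if (gen != __atomic_load_n(&list->gen_cnt, __ATOMIC_ACQUIRE))
            prev = lookup_locked(list, ctx);   /* parallel creation? */
        if (prev != NULL) {
            rte_rwlock_write_unlock(&list->lock);
            list->cb_remove(list, e);          /* drop our duplicate */
            return prev;
        }
        insert_locked(list, e);
        __atomic_fetch_add(&list->gen_cnt, 1, __ATOMIC_RELEASE);
        rte_rwlock_write_unlock(&list->lock);
        return e;
    }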
When an mlx5 list object is accessed by multiple cores, the list lock
counter is written by all the cores all the time, which increases
cache misses in the memory caches.
In addition, when one thread accesses the list for an
add/remove/lookup operation, all the other threads coming to do an
operation on the list are blocked on the lock.
Add a per-lcore cache to allow thread manipulations to be lockless
when the list objects are mostly reused.
Synchronization with atomic operations must be done in order to
allow threads to unregister an entry from another thread's cache.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
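The fast path can be sketched as follows (illustrative names; the real
code also handles registration and reference counting):

    static struct entry *
    list_lookup(struct list *list, void *ctx)
    {
        int lc = rte_lcore_index(rte_lcore_id());
        struct entry *e;

        /* Lockless probe of the calling lcore's own cache. */
        e = cache_lookup(&list->cache[lc], ctx);
        if (e != NULL)
            return e;               /* common case: object reused */
        /* Miss: fall back to the RW-lock-protected global list. */
        rte_rwlock_read_lock(&list->lock);
        e = global_lookup(list, ctx);
        rte_rwlock_read_unlock(&list->lock);
        return e;
    }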
The internal mlx5 list tool is used mainly when the list objects need
to be synchronized between multiple threads.
The "cache" term is used in the internal mlx5 list API.
Upcoming enhancements of this tool will use the "cache" term for per
thread cache management.
To prevent confusion, remove the current "cache" term from the API
names.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
Define the types of the modify header action fields with the minimum
size needed for their possible value ranges.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
The flow list is used to save the created flows so that, when the port
closes, all the flows can be flushed.
This commit takes advantage of the index pool foreach operation to
flush all the allocated flows.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This commit supports index pool operations from non-lcore threads with
an extra cache and an lcore lock.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In some cases, the application may want to know all the allocated
indices in order to apply some operations to them.
This commit adds indexed pool functions to support the foreach
operation.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
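Usage can be sketched along the following lines (hedged; treat the
function name and signature as illustrative of the foreach pattern):

    /* Walk every allocated entry in the pool. */
    uint32_t idx = 0;
    void *entry;

    while ((entry = mlx5_ipool_get_next(pool, &idx)) != NULL) {
        /* ... apply the operation to entry at index idx ... */
        idx++;  /* resume the walk from the next index */
    }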
For objects that need efficient index allocation and free, a local
cache is very helpful.
A two-level cache is introduced to allocate and free indices more
efficiently: one level is local (per lcore) and the other is global.
The global cache can save all the allocated indices, which means the
allocated indices will not be freed back to the trunks. Once the local
cache is full, the extra indices are flushed to the global cache. Once
the local cache is empty, it first tries to fetch more indices from
the global cache; if the global cache is also empty, a new trunk with
more indices is allocated.
This commit adds the new local cache mechanism for the indexed pool.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
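The allocation path can be sketched as follows (a simplified sketch
with illustrative names):

    static uint32_t
    ipool_alloc_idx(struct pool *p)
    {
        struct local_cache *lc =
            &p->cache[rte_lcore_index(rte_lcore_id())];

        if (lc->len == 0) {
            /* Local cache empty: refill from the global cache. */
            if (fetch_from_global(p, lc) != 0)
                allocate_new_trunk(p, lc); /* global empty as well */
        }
        return lc->idx[--lc->len];
    }

The free path mirrors it: freed indices go back to the local cache,
and once the local cache is full the extra indices are flushed to the
global cache.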
Some ipool instances in the driver are used as ID/index allocators and
carry extra logic in order to work with a limited range of index
values.
Add a new ipool configuration that specifies the maximum index value.
The ipool ensures that no index bigger than the maximum value is
provided.
Use this configuration in the ID allocator cases instead of the
current logic. This patch makes the maximum ID configurable for the
index pool.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
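For illustration, a bounded ID allocator could then be configured
roughly as below (hedged; the configuration field names follow the
patch description and are illustrative, not necessarily the final
ones):

    struct mlx5_indexed_pool_config cfg = {
        .size = 0,            /* no payload: a pure ID allocator */
        .trunk_size = 64,
        .max_idx = 1 << 20,   /* no ID above this value is provided */
    };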
The MR btree length is constant during Rx replenish.
Move the retrieval of the value out of the loop to reduce data loads.
A slight performance uplift was measured on both N1SDP and x86.
Suggested-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
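The change is the classic loop-invariant load hoist; schematically
(illustrative names):

    /* Before: the btree length is re-read on every iteration. */
    for (i = 0; i < n; i++)
        lkey[i] = mr_lookup(bt, bt->len, addr[i]);

    /* After: the invariant load is hoisted out of the loop. */
    uint16_t bt_len = bt->len;

    for (i = 0; i < n; i++)
        lkey[i] = mr_lookup(bt, bt_len, addr[i]);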
The mask of entries after the compressed CQE is covered by the invalid
mask of non-compressed valid CQEs. Hence, remove the redundant mask
calculation.
The change showed a slight performance uplift on N1SDP.
Fixes: 570acdb1da8a ("net/mlx5: add vectorized Rx/Tx burst for ARM")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add a new testpmd pattern field 'last_rsvd' that supports matching on
the last 8 bits of the VXLAN header.
Examples of the "last_rsvd" pattern field are shown below:
1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
This flow will exactly match the last 8 bits to be 0x80.
2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...
This flow will only match the MSB of the last 8 bits to be 1.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
This adds matching on the reserved field of the VXLAN header (the
last 8 bits). The capability from rdma-core is detected by creating a
dummy matcher using misc5 when the device is probed.
For non-zero groups and the FDB domain, the capability is detected
from rdma-core, while for NIC domain group zero it relies on the
HCA_CAP from FW.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
The new flow item allows the PMD to offload the IPv4 IHL field for
matching, if the hardware supports that operation.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
For the newly attached ports (with the "port attach" command), the
default offload settings configured from the application command line
were not applied, causing a port start failure following the attach.
For example, if scattering offload was configured in the command line
and rxpkts was configured for multiple segments, the newly attached
port failed to start due to the missing scattering offload enable in
the new port settings. The missing code to apply the offloads to the
new device and its queues is added.
The new local routine init_config_port_offloads() is introduced,
embracing the shared part of the port offloads initialization code.
Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
MAC PAUSE can take effect on a single TC or multiple TCs, depending on
the hardware. For example, the Kunpeng 920 supports MAC PAUSE on a
single TC, and the Kunpeng 930 supports MAC PAUSE on multiple TCs.
This patch adds support for MAC PAUSE on multiple TCs for such
hardware.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Due to a HW limitation, the VF VLAN filter is enabled by default and
is not allowed to be disabled. Now, the limitation has been removed in
the Kunpeng930 network engine, so this patch adds support for the VF
to modify the VLAN filter state.
A capability bit is added to differentiate between platforms and
achieve compatibility. When the VF runs on an incompatible platform or
an incompatible kernel-mode driver version is used, the VF behavior is
the same as before.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Some VF features depend on the PF, so it is necessary for the VF to
know whether the current PF supports them. Therefore, the final
capability set of the VF is composed of the capability set of the
hardware and the capability set of the PF.
For compatibility reasons, the mailbox HNS3_MBX_GET_TCINFO has been
modified to obtain more basic information about the current PF,
including the communication interface version and the current PF
capability set.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
In function softnic_conn_init(), a block of memory is allocated as the
connection buffer, but it is never freed in softnic_conn_free(), which
causes a memory leak.
Fixes: 7709a63bf178 ("net/softnic: add connection agent")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
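The fix pairs the allocation with a release; a simplified sketch (the
buffer member name is illustrative, and the actual free routine also
releases the other connection resources):

    void
    softnic_conn_free(struct softnic_conn *conn)
    {
        if (conn == NULL)
            return;
        free(conn->buf);  /* the connection buffer that was leaked */
        free(conn);
    }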
Add support for MSI-X interrupt vectors to the vmxnet3 driver.
This will allow more efficient deployments in cloud environments.
By default, it will try to allocate one vector (0) for the link
event and one MSI-X vector for each Rx queue. To simplify things, it
will only be enabled if the numbers of Tx and Rx queues are equal (so
that Tx/Rx share the same vector).
If for any reason vmxnet3 cannot enable interrupt mode, it will fall
back to the LSC-only mode.
Signed-off-by: Yong Wang <yongwang@vmware.com>
Signed-off-by: Jochen Behrens <jbehrens@vmware.com>
The return value from bond_ethdev_8023ad_flow_set() is now checked
and an appropriate message is logged on error.
Fixes: 112891cd27e5 ("net/bonding: add dedicated HW queues for LACP control")
Cc: stable@dpdk.org
Signed-off-by: Martin Havlik <xhavli56@stud.fit.vutbr.cz>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The return value is now saved to errval, and the log message on error
reports the correct function name, does not use q_id (which was out of
context), and uses the up-to-date errval.
Fixes: 112891cd27e5 ("net/bonding: add dedicated HW queues for LACP control")
Cc: stable@dpdk.org
Signed-off-by: Martin Havlik <xhavli56@stud.fit.vutbr.cz>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Set up the MSI-X interrupt, complete the PHY configuration, and set
the device link speed to start the device. Disable the interrupt, stop
the hardware, and clear the queues to stop the device.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Add basic init and uninit functions.
Map device IDs and subsystem IDs to a single ID for easy operation.
Then initialize the shared code.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Add the bare minimum PMD library and doc build infrastructure,
and claim maintainership of the ngbe PMD.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
In its current state, the API can overflow the user-passed buffer if a new
representor range appears between function calls.
In order to solve this problem, augment the representor info structure with
the numbers of allocated and initialized ranges. This way the users of this
structure can be sure they will not overrun the buffer.
Fixes: 85e1588ca72f ("ethdev: add API to get representor info")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
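With the added counts, users can size the buffer safely; a hedged
sketch of the intended two-call pattern (assuming the get call with a
NULL info pointer reports the total number of ranges):

    int num = rte_eth_representor_info_get(port_id, NULL);
    if (num < 0)
        return num;
    struct rte_eth_representor_info *info =
        calloc(1, sizeof(*info) + num * sizeof(info->ranges[0]));
    if (info == NULL)
        return -ENOMEM;
    info->nb_ranges_alloc = num;  /* capacity the callee may fill */
    num = rte_eth_representor_info_get(port_id, info);
    /* info->nb_ranges holds the number of initialized ranges and
     * never exceeds nb_ranges_alloc, even if new ranges appeared. */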
It is a normal case that the primary process already owns a device
while the secondary process tries to attach it, so suppress the error
log here to exclude this case.
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
The PMD power management scheme currently has 3 modes:
scale, monitor and pause. However, it would be nice to have a baseline
mode for easy comparison of power savings with and without these
modes.
This patch adds a 'baseline' mode where the PMD power management is
not enabled. Use --pmd-mgmt=baseline.
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Add example for FIB with VRF and ECMP support.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Churchill Khangar <churchill.khangar@intel.com>
Add support for the Longest Prefix Match (LPM) lookup to the SWX
pipeline.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Churchill Khangar <churchill.khangar@intel.com>