Priority configuration is enabled in level 3 and level 4.
Weight configuration is enabled in level 4.
This patch adds a warning log for unsupported priority
and weight configurations.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds queue and queue group priority configuration
support. The highest priority is 0, and the lowest priority
is 7.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Enable basic TM API for PF only. Support adding profiles and queue
nodes. Only max bandwidth is supported in profiles. Profiles can be
assigned to target queues and queue groups. To set up the exact queue
group, the topology needs to be reconfigured by deleting and then
recreating the queue nodes. Only TC0 is valid.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
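[Editor's note] As a rough illustration of the intended usage, here is a minimal
C sketch against the generic rte_tm API: it adds a max-bandwidth shaper profile
and attaches it to a queue node. The port/node/profile IDs, the rate and the
parent layout are assumptions for illustration only.

    #include <rte_tm.h>

    /* Hedged sketch: add a peak-bandwidth-only shaper profile and attach it
     * to a queue node. IDs, rate and parent layout are illustrative. */
    static int add_queue_shaper(uint16_t port_id, uint32_t queue_node_id,
                                uint32_t parent_node_id)
    {
        struct rte_tm_error error;
        struct rte_tm_shaper_profile_params profile = {
            .peak.rate = 1000000000 / 8, /* 1 Gbit/s in bytes per second */
            .peak.size = 4096,
        };
        struct rte_tm_node_params node = {
            .shaper_profile_id = 100,    /* matches the profile added below */
        };
        int ret;

        ret = rte_tm_shaper_profile_add(port_id, 100, &profile, &error);
        if (ret != 0)
            return ret;

        /* priority 0 is the highest; weight is used for WFQ among siblings */
        return rte_tm_node_add(port_id, queue_node_id, parent_node_id,
                               0 /* priority */, 1 /* weight */,
                               RTE_TM_NODE_LEVEL_ID_ANY, &node, &error);
    }

The hierarchy still needs to be applied with rte_tm_hierarchy_commit() before
it takes effect.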
This patch adds priority configuration support for a specific
node in the scheduler tree.
The function does not require additional acquisitions of the
scheduler lock.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds bandwidth allocation support for queue scheduling
nodes to support WFQ at the queue level.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The function ice_sched_get_node_by_id_type needs to be called
with the scheduler lock held. However, the function
ice_sched_get_node also acquires the scheduler lock,
which causes a deadlock.
This patch replaces function ice_sched_get_node with
function ice_sched_find_node_by_teid to solve this problem.
Fixes: 93e84b1bfc ("net/ice/base: add basic Tx scheduler")
Cc: stable@dpdk.org
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
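[Editor's note] The bug pattern is generic; a minimal, self-contained C
illustration follows (not the actual ice base code, the names only echo the
functions above).

    #include <pthread.h>

    static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Like ice_sched_get_node(): takes the scheduler lock itself. Calling it
     * while the lock is already held self-deadlocks on a non-recursive lock. */
    static void locking_lookup(void)
    {
        pthread_mutex_lock(&sched_lock);
        /* ... walk the scheduler tree ... */
        pthread_mutex_unlock(&sched_lock);
    }

    /* Like ice_sched_find_node_by_teid(): assumes the caller holds the lock. */
    static void lockless_lookup(void)
    {
        /* ... walk the scheduler tree, no locking ... */
    }

    /* Like ice_sched_get_node_by_id_type(): it already holds the lock, so it
     * must use the lock-free walker (the fix), not the locking one. */
    static void get_node_by_id_type(void)
    {
        pthread_mutex_lock(&sched_lock);
        lockless_lookup();
        pthread_mutex_unlock(&sched_lock);
    }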
1ca05831b9 added a check that SDP3 (used as a TX_DISABLE output to the
SFP cage on these cards) is not asserted to avoid incorrectly reporting
link up when the SFP's laser is turned off.
ff8162cb95 limited this workaround to fiber ports.
This patch:
* Adds the devarg 'fiber_sdp3_no_tx_disable', since not all fiber ixgbe
devices use SDP3 as TX_DISABLE.
Fixes: 1ca05831b9 ("net/ixgbe: fix link status")
Fixes: ff8162cb95 ("net/ixgbe: fix link status")
Cc: stable@dpdk.org
Signed-off-by: Jeff Daly <jeffd@silicom-usa.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The new VLAN virtchnl opcodes introduce new capabilities like VLAN
filtering, stripping and insertion.
The DCF first needs to query the VLAN capabilities based on the current
device configuration.
DCF is able to configure the inner VLAN filter when port VLAN is enabled
based on negotiation, and DCF is able to configure the outer VLAN (0x8100)
if port VLAN is disabled, to be compatible with legacy mode.
When port VLAN is updated by DCF, the DCF needs to reset to query the
new VLAN capabilities.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
To support the new DDP and stay compatible with the old version of the
DDP file, the API function 'check_ddp_support_proto_id' is added to detect
whether the required protocol ID is supported by the current DDP file.
Support for the new protocol ID IPV6_NEXT_PROTO is added for PF FDIR when
the new DDP is used, and the behavior is kept unchanged with the old DDP.
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
If the PF driver does not support a flex Rx descriptor required by the VF,
the legacy descriptor format is negotiated to configure the hardware
queue.
This patch fixes the issue where an Rx data path that handles flexible
descriptors (e.g.
iavf_recv_scattered_pkts_vec_avx512_flex_rxd) is selected while the
actual hardware queues are configured as legacy due to the above scenario,
which causes a coredump.
Fixes: 12b435bf8f ("net/iavf: support flex desc metadata extraction")
Cc: stable@dpdk.org
Signed-off-by: Yiding Zhou <yidingx.zhou@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Setting an invalid quanta size from devargs causes a memory leak, as
reported by Coverity.
This patch fixes the issue by correcting the error handling.
Coverity issue: 378017
Fixes: b14e8a57b9 ("net/iavf: support quanta size configuration")
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Since commit 2655926aea9b (net: Remove netif_rx_any_context() and
netif_rx_ni().) in 5.18, netif_rx_ni() no longer exists as netif_rx()
can be called from any context. So define HAVE_NETIF_RX_NI for older
releases and call the appropriate function in kni_net.
netif_rx_ni() must be used on older kernels since netif_rx() might
lead to deadlocks or other problems there.
Cc: stable@dpdk.org
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
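[Editor's note] A minimal sketch of the compatibility pattern described above
(kernel-module C; the exact kcompat layout in kni is an assumption).

    #include <linux/version.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    #if LINUX_VERSION_CODE < KERNEL_VERSION(5, 18, 0)
    #define HAVE_NETIF_RX_NI
    #endif

    static void kni_rx_one(struct sk_buff *skb)
    {
    #ifdef HAVE_NETIF_RX_NI
        netif_rx_ni(skb);   /* process-context safe variant on older kernels */
    #else
        netif_rx(skb);      /* >= 5.18: callable from any context */
    #endif
    }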
This patch encompasses a few fixes carried by a previous patch
that aimed to support bonding device stats counting.
- If mlx5_os_read_dev_stat fails, it returns 1 instead of a
negative value, causing mlx5_xstats_get to return an invalid
number of counters. Since this error is not blocking, do not
mix the ret value with the mlx5_os_read_dev_stat() return value.
This avoids the very annoying log:
"n_xstats != n_xstats_names => skipping"
- Invert the check for mlx5_os_read_dev_stat(), currently leading
us to store the result if the function failed, and use a
backup value if it succeeded, which is the opposite of what we
actually want. Revert to the original (correct) test.
- Add missing test on _mlx5_os_read_dev_counters() to prevent
using trash stats values.
Fixes: 7ed15acdcd ("net/mlx5: improve xstats of bonding port")
Cc: stable@dpdk.org
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Signed-off-by: Geoffrey Le Gourriérec <geoffrey.le_gourrierec@6wind.com>
Tested-by: Bassam Zaid AlKilani <bzalkilani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add two kinds of Rx drop counters, with physical port scope, to DPDK
xstats:
1. rx_prio[0-7]_buf_discard
The number of unicast packets dropped due to lack of shared
buffer resources.
2. rx_prio[0-7]_cong_discard
The number of packets dropped by the Weighted Random
Early Detection (WRED) function.
Prio[0-7] is determined by the VLAN PCP value, which is 0 by default.
Both counters are retrieved via the kernel ethtool API, which ultimately
issues a PRM command.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
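[Editor's note] For reference, a short hedged sketch of how an application
could read one of these counters through the standard xstats API; the port ID
and the prio-0 counter name are assumptions for illustration.

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void print_prio0_buf_discard(uint16_t port_id)
    {
        uint64_t id, value;

        /* Look the counter up by name, then fetch it by id */
        if (rte_eth_xstats_get_id_by_name(port_id,
                "rx_prio0_buf_discard", &id) == 0 &&
            rte_eth_xstats_get_by_id(port_id, &id, &value, 1) == 1)
            printf("rx_prio0_buf_discard: %" PRIu64 "\n", value);
    }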
When an error occurs in Tx and the queue is moved to the ERROR state,
it is not recoverable: during recovery its state cannot be modified
to INIT. To modify the state from RESET to INIT, the port must be
passed in the modify attributes, and in the ERROR to READY
modification path it was not provided.
Provide the port number when changing the state from RESET to INIT.
Fixes: 3a87b964ed ("net/mlx5: create Tx queues with DevX")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Yellow meter action support is added in meter hierarchy validation.
If one color uses a meter action, the other can only use the NULL action
or the same meter action, and only shared meters are supported.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When a hierarchy meter is shared by other ports, all meter policies in
the hierarchy need to be iterated to create tag rules that set the next
meter ID in the packet, which will be used by the related meter drop counter.
This patch adds the tag rule for yellow support in the hierarchy, so both
green and yellow policy flows can set the correct meter ID.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch adds support for the meter action in yellow meter policy
flows, so the meter action can be used for both green and yellow policy
flows in a meter hierarchy.
Currently the same meter must be used within one meter policy. Packets
passing the green/yellow policy flow will carry a previous meter color of
green/yellow in the subsequent meter.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch adds support for previous color awareness in meters.
The start_color setting is set to UNDEFINED when creating a meter object
that is color aware.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
If the VMDq limit is 0, a divide-by-zero error occurs.
This patch replaces the floating point exception with
a normal error message.
Fixes: d19533e86f ("examples/vhost: copy old vhost example")
Cc: stable@dpdk.org
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
The register address differs between net and blk devices.
Since most of the code is re-used, where the register address
differs we have to check the device type so that net and blk
devices go through different code paths.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Create a thread to poll and relay config space change interrupt.
Use VHOST_USER_SLAVE_CONFIG_CHANGE_MSG to inform QEMU.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Add some logs of the virtio blk device config space information
at vDPA launch, before QEMU connects.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Vhost backends of different device types have different features.
Add an API to get the vDPA device type (currently net device or blk
device), so users can set different features for different
kinds of devices.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
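[Editor's note] A hedged usage sketch follows; the getter and constant names
(rte_vhost_driver_get_vdpa_dev_type, RTE_VHOST_VDPA_DEVICE_TYPE_BLK) are taken
from this series and the feature masks and socket path are placeholders.

    #include <stdint.h>
    #include <rte_vhost.h>

    #define EXAMPLE_NET_FEATURES 0ULL   /* placeholder feature masks */
    #define EXAMPLE_BLK_FEATURES 0ULL

    static int setup_features(const char *path)
    {
        uint32_t type;

        if (rte_vhost_driver_get_vdpa_dev_type(path, &type) != 0)
            return -1;

        if (type == RTE_VHOST_VDPA_DEVICE_TYPE_BLK)
            return rte_vhost_driver_set_features(path, EXAMPLE_BLK_FEATURES);
        return rte_vhost_driver_set_features(path, EXAMPLE_NET_FEATURES);
    }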
Add SW live-migration support to the block device.
For a block device, it is critical that no request
is dropped. So when the virtio blk device is
paused, make sure the hardware last_avail_idx and
last_used_idx are the same. This indicates that all
requests have been acknowledged and there is no inflight IO.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
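[Editor's note] A minimal, hypothetical sketch of the pause condition described
above; the queue type and polling loop are illustrative, not the actual
vdpa/ifc code.

    #include <stdint.h>
    #include <unistd.h>

    /* Mirrors the hardware ring indices for illustration only */
    struct example_hw_queue {
        volatile uint16_t last_avail_idx;
        volatile uint16_t last_used_idx;
    };

    static void wait_for_blk_quiesce(struct example_hw_queue *q)
    {
        /* No inflight IO once every available request has been used (acked) */
        while (q->last_avail_idx != q->last_used_idx)
            usleep(10);
    }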
For the net device type, only the interrupt of the rxq needs to be relayed.
But for the block device, since all the queues are used for both read and
write requests, the interrupts of all queues need to be relayed.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
For the virtio blk device, re-use part of the ifc driver ops.
Implement ifcvf_blk_get_config for the virtio blk device.
Support the VHOST_USER_PROTOCOL_F_CONFIG feature for the virtio
blk device.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Add support for VHOST_USER_GET_CONFIG and VHOST_USER_SET_CONFIG.
The VHOST_USER_GET_CONFIG and VHOST_USER_SET_CONFIG messages are only
supported by virtio blk vDPA devices.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
get_config and set_config are necessary ops for the blk device.
Add get_config and set_config ops to the vDPA ops.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
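[Editor's note] As an illustration, a hedged sketch of what a blk driver's
get_config callback could look like; the config layout, prototype and helper
names are assumptions, and the real member signatures are defined by this
series in the vDPA driver header.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical virtio-blk config mirror */
    struct example_blk_cfg {
        uint64_t capacity;   /* device size in 512-byte sectors */
        uint32_t blk_size;
    };

    static struct example_blk_cfg dev_cfg = { .capacity = 2048, .blk_size = 512 };

    static int example_blk_get_config(int vid, uint8_t *config, uint32_t size)
    {
        (void)vid;
        if (config == NULL || size < sizeof(dev_cfg))
            return -1;
        memcpy(config, &dev_cfg, sizeof(dev_cfg));
        return 0;
    }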
Re-use the vdpa/ifc code, distinguish blk and net device by pci_device_id.
Blk and net device are implemented with proper feature and ops.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
In vhost_user_msg_handler(), if vhost message handling
fails, we should check whether the queue is locked and
release the lock before returning. Otherwise, it will cause a
deadlock later.
Fixes: 7f31d4ea05 ("vhost: fix lock on device readiness notification")
Cc: stable@dpdk.org
Signed-off-by: Wenwu Ma <wenwux.ma@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Tested-by: Wei Ling <weix.ling@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
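[Editor's note] The fix follows a generic error-path pattern; a self-contained
illustration (not the actual vhost code).

    #include <pthread.h>

    static pthread_mutex_t access_lock = PTHREAD_MUTEX_INITIALIZER;

    static int handle_msg(int msg_ok)
    {
        int locked = 1;                 /* set once the handler takes the lock */

        pthread_mutex_lock(&access_lock);

        if (!msg_ok) {
            if (locked)
                pthread_mutex_unlock(&access_lock); /* the missing unlock */
            return -1;
        }

        pthread_mutex_unlock(&access_lock);
        return 0;
    }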
This patch adds a use case for the async dequeue API: the vswitch can
leverage a DMA device to accelerate the vhost async dequeue path.
Signed-off-by: Wenwu Ma <wenwux.ma@intel.com>
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yvonne Yang <yvonnex.yang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch implements the asynchronous dequeue data path for the vhost
split ring. A new API, rte_vhost_async_try_dequeue_burst(), is introduced.
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Tested-by: Yvonne Yang <yvonnex.yang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
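[Editor's note] A hedged sketch of how a data path could call the new API; the
vid, queue, DMA channel and mempool are assumptions, and the exact prototype
should be checked against rte_vhost_async.h.

    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_vhost_async.h>

    static uint16_t poll_async_dequeue(int vid, uint16_t queue_id,
                                       struct rte_mempool *mbuf_pool,
                                       struct rte_mbuf **pkts, uint16_t count,
                                       int16_t dma_id, uint16_t vchan_id)
    {
        int nr_inflight = 0;

        /* Mbufs completed by the DMA engine are returned in pkts[];
         * nr_inflight reports descriptors still being copied. */
        return rte_vhost_async_try_dequeue_burst(vid, queue_id, mbuf_pool,
                                                 pkts, count, &nr_inflight,
                                                 dma_id, vchan_id);
    }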
This patch refactors copy_desc_to_mbuf() used by the sync
path to support both sync and async descriptor to mbuf filling.
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yvonne Yang <yvonnex.yang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch refactors vhost async enqueue path and dequeue path to use
the same function async_fill_seg() for preparing batch elements,
which simplifies the code without performance degradation.
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yvonne Yang <yvonnex.yang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch extracts the descriptors to buffers filling from
copy_desc_to_mbuf() into a dedicated function. Besides, enqueue
and dequeue path are refactored to use the same function
sync_fill_seg() for preparing batch elements, which simplifies
the code without performance degradation.
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Tested-by: Yvonne Yang <yvonnex.yang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch adds runtime checks in unsafe Vhost async APIs,
to ensure the access lock is taken.
The detection won't work every time, as another thread
could take the lock, but it would help to detect misuse
of these unsafe APIs.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
This patch adds statistics for packets in-flight submission
and completion, when Vhost async mode is used.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
This patch adds statistics for IOTLB hits and misses.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
This patch adds a new virtqueue statistic for guest
notifications. It is useful to deduce from hypervisor side
whether the corresponding guest Virtio device is using
Kernel Virtio-net driver or DPDK Virtio PMD.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Now that we have Vhost statistics APIs, this patch replaces
Vhost PMD extended statistics implementation with calls
to the new API. It will enable getting more statistics for
counters that cannot be implemented at the PMD level.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
This patch introduces new APIs for the application
to query and reset per-virtqueue statistics. The
patch also introduces generic counters.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
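[Editor's note] A hedged sketch of querying and resetting the statistics of one
virtqueue; the API and structure names follow this series, the vid and queue_id
are assumptions, and statistics must have been enabled for the socket at
registration time.

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_vhost.h>

    static void dump_and_reset_vq_stats(int vid, uint16_t queue_id)
    {
        int n = rte_vhost_vring_stats_get_names(vid, queue_id, NULL, 0);
        if (n <= 0)
            return;

        struct rte_vhost_stat_name *names = calloc(n, sizeof(*names));
        struct rte_vhost_stat *stats = calloc(n, sizeof(*stats));

        if (names != NULL && stats != NULL &&
            rte_vhost_vring_stats_get_names(vid, queue_id, names, n) == n &&
            rte_vhost_vring_stats_get(vid, queue_id, stats, n) == n) {
            for (int i = 0; i < n; i++)
                printf("%s: %" PRIu64 "\n",
                       names[stats[i].id].name, stats[i].value);
            rte_vhost_vring_stats_reset(vid, queue_id);
        }

        free(names);
        free(stats);
    }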
Since QEMU 5.2.0 fixes the vhost split ring multi-queue reconnection
issue in commit f66337bdbfda ("vhost-user: save features of multiqueues
if chardev is closed"), this patch updates known issue to indicate
the range of affeacted QEMU versions.
Fixes: b37e95507e ("doc: add vhost multi-queue reconnection issue")
Cc: stable@dpdk.org
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
vq->async accesses must be protected with vq->access_lock.
Fixes: eb666d2408 ("vhost: fix async unregister deadlock")
Fixes: 0c0935c5f7 ("vhost: allow to check in-flight packets for async vhost")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Sunil Pai G <sunil.pai.g@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Remove some macros that were defined but not used in this driver.
As a result, rte_smp_xx occurrences are removed from this driver.
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Kathleen Capella <kathleen.capella@arm.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>