NVIDIA acquired Mellanox Technologies in 2020.
The DPDK documentation and code might still include instances
of or references to Mellanox trademarks (like BlueField and ConnectX)
that are now NVIDIA trademarks.
The PCI IDs and copyrights are unchanged.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Gal Cohen <galco@nvidia.com>
This updates the doc to include newly supported devices such as
ConnectX-7, and updates the descriptions of older ones.
Signed-off-by: Raslan Darawsheh <rasland@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The live migration (LM) process involves many object creations and
destructions on both the source and the destination servers.
The longer the LM takes, the more packets the VM drops.
To reduce LM time, the mlx5 FW configurations need to be parallelized.
Add internal multi-thread management in the driver for this purpose.
A new devarg defines the number of threads and their CPU cores.
This management is shared among all the devices of the driver.
Since the event_core also affects the datapath events thread,
reduce the priority of the datapath event thread to
allow fast configuration of the devices doing the LM.
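As a hedged illustration (assuming the devarg is named max_conf_threads,
which should be checked against the mlx5 vDPA guide, and a hypothetical
PCI address), probing a device with multi-threaded configuration could
look like:

    #include <rte_dev.h>

    /* Sketch: probe an mlx5 vDPA device with 4 configuration threads.
     * The devarg name "max_conf_threads" and the PCI address are
     * illustrative assumptions. */
    static int
    probe_with_conf_threads(void)
    {
            return rte_dev_probe("0000:08:00.2,class=vdpa,max_conf_threads=4");
    }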
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The motivation of this change is to reduce vDPA device queue creation
time by creating some queue resources at vDPA device probe stage.
In the VM live migration scenario, this saves 0.8 ms per queue
creation, thus reducing LM network downtime.
To create queue resources (umem/counter) in advance, the driver needs
to know the virtio queue depth and the maximum number of queues the VM
will use.
Introduce two new devargs: queues (maximum queue pair number) and
queue_size (queue depth). Both arguments must be provided; if only one
is given, it is ignored and no pre-creation takes place.
The queues and queue_size values must also match the vhost
configuration the driver later receives. Otherwise, the pre-created
resources are either wasted or insufficient, or must be destroyed and
recreated (in case of a queue_size mismatch).
Pre-created umem/counter resources are kept alive until the vDPA
device is removed.
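For illustration, a minimal probe sketch with both devargs (the PCI
address is a hypothetical assumption; the queues and queue_size names
come from this patch):

    #include <rte_dev.h>

    /* Sketch: pre-create resources for 8 queue pairs of depth 1024.
     * Both devargs must be given together and must match the vhost
     * configuration received later. */
    static int
    probe_with_precreated_queues(void)
    {
            return rte_dev_probe(
                    "0000:08:00.2,class=vdpa,queues=8,queue_size=1024");
    }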
Signed-off-by: Yajun Wu <yajunw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In order to speed up device suspend and resume, make the statistics
counters persistent across reconfigurations until the device is removed.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add new documentation for the MLX5 common driver, containing:
- Its feature list (which did not exist before).
- Its devargs description.
- Device configuration information and a tutorial.
- A Quick Start Guide for Mellanox OFED/EN.
Move all the shared information from the other MLX5 PMD docs into this
document and add references from them to the new common doc.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The docs contain references to "pmd" but the proper usage is "PMD".
Cc: stable@dpdk.org
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for unicast and broadcast MAC filter configuration.
Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add new vDPA PMD to support vDPA operations of Xilinx devices.
This patch implements probe and remove functions.
Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The RoCE disabling requirement is based on the PCI address.
In order to support Sub-Functions (SF), a conversion is needed
in the case of an auxiliary device.
An SF device can be probed with a devargs string such as:
auxiliary:mlx5_core.sf.<id>,class=vdpa
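For example, a minimal probe sketch (the SF id 2 is an illustrative
assumption in place of <id>):

    #include <rte_dev.h>

    /* Sketch: probe an SF-based vDPA device through the auxiliary bus. */
    static int
    probe_sf_vdpa(void)
    {
            return rte_dev_probe("auxiliary:mlx5_core.sf.2,class=vdpa");
    }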
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The driver should notify the guest for each traffic burst detected by
CQ polling.
The CQ polling trigger is defined by the `event_mode` device argument:
either busy polling on all the CQs, or a blocking call waiting for a HW
completion event through a DevX channel.
The polling event modes can also move to the blocking call when the
traffic rate is low.
The current blocking call uses the EAL interrupt API, which suffers a
lot of management overhead and serves all the drivers and libraries
with only a single thread.
Use the blocking FD of the DevX channel in order to perform the
blocking call directly through the DevX channel FD mechanism.
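Conceptually, this replaces the shared EAL interrupt thread with a
direct blocking wait on the channel FD, along the lines of this hedged
sketch (channel_fd is a hypothetical descriptor obtained from the DevX
channel, not the driver's actual code):

    #include <poll.h>

    /* Sketch: block directly on the DevX channel FD until the HW posts
     * a completion event, bypassing the EAL interrupt API. */
    static int
    wait_for_cq_event(int channel_fd)
    {
            struct pollfd pfd = { .fd = channel_fd, .events = POLLIN };

            return poll(&pfd, 1, -1); /* block until an event arrives */
    }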
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The networking drivers feature matrix had rows showing
OS and kernel module support:
- BSD nic_uio
- Linux UIO
- Linux VFIO
- Other kdrv
- Windows
The kernel module details are removed to keep only OS support:
- FreeBSD
- Linux
- Windows
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The following parameters control the HW queue moderation feature.
This feature helps to control the trade-off between traffic performance
and latency.
Each packet completion report from HW to SW requires CQ processing by SW
and triggers an interrupt for the guest driver. Interrupt reporting and
handling cost CPU cycles and time, and their amount directly affects
packet performance and latency. An example combining the parameters is
sketched after the list.
hw_latency_mode parameter [int]
0, HW default.
1, Latency is counted from the first packet completion report.
2, Latency is counted from the last packet completion report.
hw_max_latency_us parameter [int]
0 - 4095, The maximum time in microseconds that a packet completion
report can be delayed.
hw_max_pending_comp parameter [int]
0 - 65535, The maximum number of pending packet completions in an HW
queue.
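A hedged example combining the three parameters (the PCI address is an
illustrative assumption):

    #include <rte_dev.h>

    /* Sketch: count latency from the last completion report, delaying
     * a report by at most 500 us or 64 pending completions. */
    static int
    probe_with_moderation(void)
    {
            return rte_dev_probe("0000:08:00.2,class=vdpa,"
                                 "hw_latency_mode=2,hw_max_latency_us=500,"
                                 "hw_max_pending_comp=64");
    }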
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
For better performance and latency, this patch sets the default event
handling mode to polling mode, which uses a dedicated thread per device
to poll and process events.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch adds a new device argument to specify the CPU core affinity
of the event polling thread, for better latency and throughput. The
thread can also be located by its name, "vDPA-mlx5-<id>".
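Assuming the devarg is the event_core argument mentioned elsewhere in
these notes, and a hypothetical PCI address, a probe sketch could be:

    #include <rte_dev.h>

    /* Sketch: pin this device's event polling thread to CPU core 3. */
    static int
    probe_with_event_core(void)
    {
            return rte_dev_probe("0000:08:00.2,class=vdpa,event_core=3");
    }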
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
To improve throughput and latency, this patch allows setting the Rx
polling timer delay to 0 us.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
When a hardware error happened, vDPA did not get that information and
left the driver silent: in a working state but with no response.
This patch subscribes to the firmware virtq error event and tries to
recover at most 3 times within 3 seconds, stopping the virtq if the
maximum retry number is reached.
When an error happens, the PMD logs at warning level; if the recovery
fails, it outputs an error log. Query the virtq statistics to get the
error counter reports.
Acked-by: Matan Azrad <matan@nvidia.com>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Since the built driver filenames have changed in DPDK 20.11, we need to
update the driver doc to match.
Most drivers start their section with the driver filename highlighted
in bold, while a number were missing the highlight. When updating the
names, add the bold text markers to any that are missing them, so as to
make things more consistent.
Fixes: a20b2c01a7 ("build: standardize component names and defines")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Make is no longer supported for compiling DPDK; references to it are
now removed from the documentation.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Run a simple script to remove trailing white space and blank
lines at the end of files across all documents.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Update the docs, adding MCX621102AN-ADAT to the list of NICs supported
by the MLX5 vDPA driver.
Suggested-by: William Tu <u9012063@gmail.com>
Signed-off-by: Sergey Madaminov <sergey.madaminov@gmail.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The CQ polling is necessary in order to manage guest notifications when
the guest doesn't work in poll mode (callfd != -1).
The CQ polling scheduling method can affect the host CPU utilization and
the traffic bandwidth.
Define 3 modes to control the CQ polling scheduling:
1. A timer thread which automatically adjusts its delays to the incoming
traffic rate.
2. A timer thread with a fixed delay time.
3. Interrupts: Each CQE burst arms the CQ in order to get an interrupt
event on the next traffic burst.
When traffic stops, mode 3 is taken automatically.
The interrupt management takes a lot of CPU cycles but forwards traffic
events to the guest very fast.
The timer thread saves the interrupt overhead but may add delay to the
guest notification.
Add device arguments to control the mode; an illustrative sketch
follows.
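A hedged probe sketch selecting the interrupt-driven mode (assuming the
three modes above map to event_mode values 0-2 in list order, with a
hypothetical PCI address):

    #include <rte_dev.h>

    /* Sketch: use interrupt-driven CQ polling scheduling (mode 3 in
     * the list above, assumed to be event_mode=2). */
    static int
    probe_with_event_mode(void)
    {
            return rte_dev_probe("0000:08:00.2,class=vdpa,event_mode=2");
    }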
Signed-off-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The guest virtio device may request an MTU update when the vhost
backend device exposes a capability to support it.
Expose the MTU feature capability.
At configuration time, check the requested MTU and update it in the HW
device.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add support for statistics operations.
A DevX counter object is allocated per virtq in order to
manage the virtq statistics.
The counter object is allocated before the virtq creation
and destroyed after it, so the statistics are valid only
during the lifetime of the virtq.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The vDPA device offloads all the datapath of the vhost
device to the HW device.
In order to expose traffic information to the user, this
patch introduces 3 new APIs: to get the traffic statistics,
to get the device statistics names, and to reset the
statistics, per virtio queue.
The statistics are taken directly from the vDPA driver
managing the HW device and can differ between vendor
drivers.
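A hedged usage sketch of the three APIs (the counter array size and the
trimmed error handling are illustrative simplifications):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_vdpa.h>

    /* Sketch: dump and reset the statistics of one virtio queue. */
    static void
    dump_and_reset_queue_stats(struct rte_vdpa_device *dev, uint16_t qid)
    {
            struct rte_vdpa_stat_name names[16];
            struct rte_vdpa_stat stats[16];
            int n, i;

            n = rte_vdpa_get_stats_names(dev, names, 16);
            if (n <= 0)
                    return;
            if (n > 16)
                    n = 16; /* keep within the illustrative buffers */
            n = rte_vdpa_get_stats(dev, qid, stats, n);
            for (i = 0; i < n; i++)
                    printf("%s: %" PRIu64 "\n",
                           names[stats[i].id].name, stats[i].value);
            rte_vdpa_reset_stats(dev, qid);
    }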
Signed-off-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The devices of the ConnectX family may have a two-letter suffix.
Such a suffix is preceded by a space, and its second letter (x) is
lowercase:
- ConnectX-4 Lx
- ConnectX-5 Ex
- ConnectX-6 Dx
The capitalization of the device family name BlueField is also fixed.
The lists of supported devices are fixed.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Add support for the live migration feature in HW:
Create a single Mkey that maps the memory address space of the
VHOST live migration log file.
Modify VIRTIO_NET_Q object and provide vhost_log_page,
dirty_bitmap_mkey, dirty_bitmap_size, dirty_bitmap_addr
and dirty_bitmap_dump_enable.
Modify VIRTIO_NET_Q object and move state to SUSPEND.
Query VIRTIO_NET_Q and get hw_available_idx and hw_used_idx.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add support for the following features in virtq configuration:
VIRTIO_F_RING_PACKED,
VIRTIO_NET_F_HOST_TSO4,
VIRTIO_NET_F_HOST_TSO6,
VIRTIO_NET_F_CSUM,
VIRTIO_NET_F_GUEST_CSUM,
VIRTIO_F_VERSION_1,
Support for these features depends on the DevX capabilities reported
by the device.
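For illustration only, a feature mask could be built from the reported
capabilities along these lines (the devx_caps structure and its fields
are hypothetical, not the driver's real layout):

    #include <stdint.h>
    #include <linux/virtio_config.h>
    #include <linux/virtio_net.h>

    /* Hypothetical capability flags reported by DevX. */
    struct devx_caps { int tso_ipv4; int tso_ipv6; int tx_csum; int rx_csum; };

    /* Sketch: advertise a virtio feature bit only when the matching
     * DevX capability is present. */
    static uint64_t
    features_from_caps(const struct devx_caps *caps)
    {
            uint64_t f = 1ULL << VIRTIO_F_VERSION_1;

            if (caps->tso_ipv4)
                    f |= 1ULL << VIRTIO_NET_F_HOST_TSO4;
            if (caps->tso_ipv6)
                    f |= 1ULL << VIRTIO_NET_F_HOST_TSO6;
            if (caps->tx_csum)
                    f |= 1ULL << VIRTIO_NET_F_CSUM;
            if (caps->rx_csum)
                    f |= 1ULL << VIRTIO_NET_F_GUEST_CSUM;
            return f;
    }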
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add support for the get_features and get_protocol_features operations.
Some of the features are reported through the DevX capabilities.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add a new driver to support vDPA operations by Mellanox devices.
The first Mellanox devices which support vDPA operations are
ConnectX-6 Dx and BlueField-1 HCAs, on their PF and VF ports.
This driver depends on rdma-core, like the mlx5 PMD, and it uses
mlx5 DevX to create HW objects directly through the FW.
Hence, the common/mlx5 library is linked to the mlx5_vdpa driver.
This driver will not be compiled by default due to the above
dependencies.
Register a new log type for this driver.
Register a new log type for this driver.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
A new vDPA class was recently introduced.
The IFC driver implements the vDPA operations,
hence it should be moved to the vDPA class.
Move it.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add the vDPA devices features table and its explanation.
Any vDPA driver can add its own supported features by adding a new ini
file to the features directory in doc/guides/vdpadevs/features, as
sketched below.
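For illustration, such an ini file could look like this minimal sketch
(the driver name and feature entries are hypothetical; the real entry
names are defined by the template in that directory):

    ;
    ; Supported features of the 'foo' vDPA driver.
    ;
    [Features]
    csum       = Y
    guest csum = Y
    Linux      = Y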
Signed-off-by: Matan Azrad <matan@mellanox.com>
The vDPA (vhost data path acceleration) drivers provide support for
the vDPA operations introduced by the rte_vhost library.
Any driver which provides the vDPA operations should be moved or added
to the vdpa class under drivers/vdpa/.
Create the general files for the vDPA class in drivers and in the
documentation.
The management tree for vDPA drivers is
git://dpdk.org/next/dpdk-next-virtio.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>