Skip tests which are not yet supported on Windows:
- The libraries the tests depend on are not enabled on Windows yet
- The tests compile, but fail with issues still under investigation:
* test_func_reentrancy:
Windows EAL has no protection against repeated calls.
* test_lcores:
Execution enters an infinite loop, requires investigation.
* test_rcu_qsbr_perf:
Execution hangs on Windows, requires investigation.
Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Add prefix to resolve name collision on Windows.
Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Remove two alarm_autotest test cases which do a bogus range check
on Windows.
Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
On Windows, strerror() returns just "Unknown error" for an errnum greater
than MAX_ERRNO, while Linux and FreeBSD return "Unknown error <num>",
which is what errno_autotest currently expects. Differentiate
the expected error string on Windows to remove a "duplicate error code" failure.
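A minimal sketch of the differentiation in the test, assuming it compares
against an expected string; the buf/expected names are illustrative:

#ifdef RTE_EXEC_ENV_WINDOWS
	/* Windows strerror() drops the number for unknown errors */
	expected = "Unknown error";
#else
	snprintf(buf, sizeof(buf), "Unknown error %d", errnum);
	expected = buf;
#endif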
Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
DPDK logs_autotest on Windows failed at the "dynamic log types" tests.
The failures are in two test cases for the rte_log_set_level_regexp API,
because regular expressions are not supported on Windows in DPDK yet
and regcomp/regexec are just stubs there (in regex.h).
In app/test/test_logs.c, compile out these two test cases with #ifndef,
and for the rte_log_set_level_pattern validation case following them,
differentiate the expected log level passed into the CHECK_LEVELS macro.
Now logs_autotest completes for all dynamic and static log types.
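A minimal sketch of the guard, assuming the test's log type pattern; only
the conditional compilation is taken from the commit text:

#ifndef RTE_EXEC_ENV_WINDOWS
	/* regcomp()/regexec() are stubs in Windows' regex.h */
	if (rte_log_set_level_regexp("logtype.*", RTE_LOG_DEBUG) < 0)
		return -1;
#endif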
Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Even though test_interrupts.c can compile on Windows, skip the interrupt
tests for now, since the majority of eal_interrupt functions on Windows
are stubs. The skip will be removed once interrupts are fully enabled on
Windows.
Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
- Replace POSIX-specific code with DPDK equivalents or
conditionally disable it on Windows
- Use NUL on Windows as the equivalent of /dev/null on Unix
- Exclude tests not supported on Windows yet
* multi-process
* PMD performance statistics display on signal
Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
UT memory_autotest on Windows has 2 failing cases for the EAL APIs
eal_memalloc_get_seg_fd and eal_memalloc_get_seg_fd_offset, which
are not supported on Windows yet. They should return ENOTSUP so that
test_memory.c does not mark these 2 cases as failures, the same as
the other ENOTSUP cases.
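A minimal sketch of the Windows stubs, assuming the internal EAL
prototypes; only the -ENOTSUP return is taken from the commit text:

int
eal_memalloc_get_seg_fd(int list_idx, int seg_idx)
{
	RTE_SET_USED(list_idx);
	RTE_SET_USED(seg_idx);
	return -ENOTSUP;
}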
Fixes: 2a5d547a4a ("eal/windows: implement basic memory management")
Cc: stable@dpdk.org
Signed-off-by: Jie Zhou <jizh@linux.microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Parameters count and esize are both unsigned int, and their product can
legally exceed unsigned int and lead to a runtime access violation.
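A minimal sketch of the hazard; the values are illustrative:

unsigned int count = 1u << 17, esize = 1u << 16;
uint64_t bad  = count * esize;           /* wraps in 32-bit arithmetic first */
uint64_t good = (uint64_t)count * esize; /* widen one operand before multiplying */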
Fixes: cc4b218790 ("ring: support configurable element size")
Cc: stable@dpdk.org
Signed-off-by: Zhihong Wang <wangzhihong.wzh@bytedance.com>
Reviewed-by: Liang Ma <liangma@liangbit.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The error value reported by rte_ring_create_elem() via rte_errno should
be a positive integer. However, if the rte_ring_get_memsize_elem()
function fails, its negative return value was directly used as the error
value. As a result, external callers that check the error value, such as
rte_mempool_create(), would fail.
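A minimal sketch of the fix inside rte_ring_create_elem(), negating the
negative size into a positive rte_errno:

ssize_t ring_size = rte_ring_get_memsize_elem(esize, count);

if (ring_size < 0) {
	rte_errno = -ring_size; /* store a positive errno value */
	return NULL;
}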
Fixes: a182620042 ("ring: get size in memory")
Cc: stable@dpdk.org
Reported-by: Nan Zhou <zhounan14@huawei.com>
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When enqueueing/dequeueing to/from the ring we try to optimize by manual
loop unrolling. The check for this optimization looks like:
if (likely(idx + n < size)) {
where 'idx' points to the first usable element (empty slot for enqueue,
data for dequeue). The correct comparison here should be '<=' instead
of '<'.
This is not a functional error, since we fall back to the loop with
correct index checks; it is just minor suboptimal behaviour for the
case when we want to enqueue/dequeue exactly the number of elements
remaining in the ring before it wraps to its beginning.
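A sketch of why '<=' is correct: when idx + n == size, the n elements fit
exactly up to the end of the ring, so the unrolled path needs no wrap
handling:

if (likely(idx + n <= size)) {
	/* unrolled copy of n elements, no wrap */
} else {
	/* copy (size - idx) elements, then wrap to the beginning */
}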
Fixes: cc4b218790 ("ring: support configurable element size")
Fixes: 286bd05bf7 ("ring: optimisations")
Signed-off-by: Andrzej Ostruszka <amo@semihalf.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Sometimes the OS tries to switch the thread to another core. So, bind
the lcore thread to a fixed core.
Implement affinity call on Windows similar to Linux.
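A minimal sketch of the Windows side, assuming pinning by CPU number;
error handling is elided:

#include <windows.h>

static int
thread_bind_cpu(unsigned int cpu)
{
	DWORD_PTR mask = (DWORD_PTR)1 << cpu;

	/* SetThreadAffinityMask() returns the previous mask, or 0 on failure */
	return SetThreadAffinityMask(GetCurrentThread(), mask) != 0 ? 0 : -1;
}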
Signed-off-by: Qiao Liu <qiao.liu@intel.com>
Signed-off-by: Pallavi Kadam <pallavi.kadam@intel.com>
Acked-by: Narcisa Vasile <navasile@linux.microsoft.com>
Acked-by: Ranjit Menon <ranjit.menon@intel.com>
Acked-by: Tal Shnaiderman <talshn@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
"What gets measured gets done."
This patch adds mempool performance tests where the number of objects to
put and get is constant at compile time, which may significantly improve
the performance of these functions. [*]
Also, it is ensured that the array holding the objects used for testing
is cache line aligned, for maximum performance.
And finally, the following entries are added to the list of tests:
- Number of kept objects: 512
- Number of objects to get and to put: The number of pointers fitting
into a cache line, i.e. 8 or 16
[*] Some example performance test (with cache) results:
get_bulk=4 put_bulk=4 keep=128 constant_n=false rate_persec=280480972
get_bulk=4 put_bulk=4 keep=128 constant_n=true rate_persec=622159462
get_bulk=8 put_bulk=8 keep=128 constant_n=false rate_persec=477967155
get_bulk=8 put_bulk=8 keep=128 constant_n=true rate_persec=917582643
get_bulk=32 put_bulk=32 keep=32 constant_n=false rate_persec=871248691
get_bulk=32 put_bulk=32 keep=32 constant_n=true rate_persec=1134021836
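A minimal sketch of the constant-size path, assuming an existing mempool
mp and 8-byte pointers on 64-byte cache lines; the macro name is
illustrative:

#define CONST_BURST 8 /* pointers per cache line, a compile-time constant */

void *objs[CONST_BURST] __rte_cache_aligned;

if (rte_mempool_get_bulk(mp, objs, CONST_BURST) == 0)
	rte_mempool_put_bulk(mp, objs, CONST_BURST);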
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The KNI PMD name should be "net_kni".
Fixes: 75e2bc54c0 ("net/kni: add KNI PMD")
Cc: stable@dpdk.org
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Fix kni's ioctl signature to correctly match the kernel's
structs. This shaves off the (void*) casts and uses struct file*
instead of struct inode*. With the correct signature, control flow
integrity checkers are no longer confused at this point.
Signed-off-by: Markus Theil <markus.theil@secunet.com>
Tested-by: Michael Pfeiffer <michael.pfeiffer@tu-ilmenau.de>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
The KNI kthreads seem to be rescheduled at a granularity of roughly
1 millisecond right now, which seems to be insufficient for performing
tests involving a lot of control plane traffic.
Even if KNI_KTHREAD_RESCHEDULE_INTERVAL is set to 5 microseconds, it
seems that the existing code cannot reschedule at the desired granularity,
due to precision constraints of schedule_timeout_interruptible().
In our use case, we leverage the Linux kernel for the control plane, and
it is not uncommon to have 60K - 100K pps for some signaling protocols.
Since we are not in atomic context, the usleep_range() function seems
more appropriate for introducing smaller controlled delays, in the range
of 5-10 microseconds. Upon reading the existing code, it would seem that
this was the original intent. Adding sub-millisecond delays seems
unfeasible with a call to schedule_timeout_interruptible().
#define KNI_KTHREAD_RESCHEDULE_INTERVAL 5 /* us */

schedule_timeout_interruptible(
	usecs_to_jiffies(KNI_KTHREAD_RESCHEDULE_INTERVAL));
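A sketch of the replacement under test, with bounds matching the first
measured variant below:

usleep_range(KNI_KTHREAD_RESCHEDULE_INTERVAL,
	     KNI_KTHREAD_RESCHEDULE_INTERVAL * 2);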
Below, we attempted a brief comparison between the existing
implementation, which uses schedule_timeout_interruptible(), and one
using usleep_range(). We measure the CPU usage and the RTT between two
KNI interfaces, which are created on top of vmxnet3 adapters, connected
by a vSwitch.
insmod rte_kni.ko kthread_mode=single carrier=on
schedule_timeout_interruptible(usecs_to_jiffies(5))
kni_single CPU Usage: 2-4 %
[root@localhost ~]# ping 1.1.1.2 -I eth1
PING 1.1.1.2 (1.1.1.2) from 1.1.1.1 eth1: 56(84) bytes of data.
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=2.70 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=1.00 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=1.99 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=0.985 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=1.00 ms
usleep_range(5, 10)
kni_single CPU usage: 50%
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=0.338 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=0.150 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=0.123 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=0.139 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=0.159 ms
usleep_range(20, 50)
kni_single CPU usage: 24%
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=0.202 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=0.170 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=0.171 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=0.248 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=0.185 ms
usleep_range(50, 100)
kni_single CPU usage: 13%
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=0.537 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=0.257 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=0.231 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=0.143 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=0.200 ms
usleep_range(100, 200)
kni_single CPU usage: 7%
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=0.716 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=0.167 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=0.459 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=0.455 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=0.252 ms
usleep_range(1000, 1100)
kni_single CPU usage: 2%
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=2.22 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=1.17 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=1.17 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=1.17 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=1.15 ms
Upon testing, usleep_range(1000, 1100) seems roughly equivalent in
latency and CPU usage to the schedule_timeout_interruptible() variant,
while usleep_range(100, 200) gives a decent tradeoff between latency
and CPU usage, and allows users to tweak the limits for improved
precision if they have such use cases.
Interestingly, disabling RTE_KNI_PREEMPT_DEFAULT seems to lead to a
softlockup on my kernel.
Kernel panic - not syncing: softlockup: hung tasks
CPU: 0 PID: 1226 Comm: kni_single Tainted: G W O 3.10 #1
<IRQ> [<ffffffff814f84de>] dump_stack+0x19/0x1b
[<ffffffff814f7891>] panic+0xcd/0x1e0
[<ffffffff810993b0>] watchdog_timer_fn+0x160/0x160
[<ffffffff810644b2>] __run_hrtimer.isra.4+0x42/0xd0
[<ffffffff81064b57>] hrtimer_interrupt+0xe7/0x1f0
[<ffffffff8102cd57>] smp_apic_timer_interrupt+0x67/0xa0
[<ffffffff8150321d>] apic_timer_interrupt+0x6d/0x80
This patch therefore also removes this option.
References:
[1] https://www.kernel.org/doc/Documentation/timers/timers-howto.txt
Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>
Acked-by: Padraig Connolly <Padraig.J.Connolly@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Starting in meson 0.56, the functions meson.source_root() and
meson.build_root() are deprecated and to be replaced by the [more
descriptive] functions: project_source_root()/global_source_root() and
project_build_root()/global_build_root(). Unfortunately, these new
replacement functions were only added in 0.56 release too, so to use
them we would need version checks for old/new functions to remove the
deprecation warnings.
However, the functions current_build_dir() and current_source_dir()
remain unaffected by all this, so we can bypass the versioning problem
by saving off these values to "dpdk_source_root" and "dpdk_build_root"
in the top-level meson.build file.
Bugzilla ID: 926
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Jerin Jacob <jerinj@marvell.com>
Each build, meson would issue a warning reporting that the
"warning_level" setting should be used in place of adding -Wextra
directly to our build commands. Testing with meson 0.61 shows that the
only difference for gcc and clang builds between warning levels 1 and
2 is the addition of -Wextra, so we can remove the warning by deleting
our explicit set of Wextra and changing the build defaults to
warning_level 2.
Fixes: 524a0d5d66 ("build: enable extra warnings with meson")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Meson 0.61.1 is giving warnings that the calls to run_command do not
always explicitly specify if the result is to be checked or not, i.e.
there is a missing "check" parameter. This is because the default
behaviour without the parameter is due to change in the future.
We can fix these warnings by explicitly adding into each call whether
the result should be checked by meson or not. This patch therefore
adds in "check: false" to each run_command call where the result is
being checked by the DPDK meson.build code afterwards, and adds in
"check: true" to any calls where the result is currently unchecked.
Bugzilla ID: 921
Cc: stable@dpdk.org
Reported-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Jerin Jacob <jerinj@marvell.com>
The generic header file was missing
from the list of files to install.
Fixes: 9667d97c25 ("pflock: add phase-fair reader writer locks")
Cc: stable@dpdk.org
Signed-off-by: Martijn Bakker <gladdyu@gmail.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
For the free buffer operation in the i40e vector path, it is unnecessary
to store NULL into txep.mbuf. This is because when putting mbufs into
the Tx queue, tx_tail is the sentinel, and when doing tx_free, tx_next_dd
is the sentinel; in neither process is mbuf == NULL used as a check
condition. Thus the reset of mbuf is unnecessary and can be omitted.
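A minimal sketch of the simplified free loop; names other than txep and
mbuf are illustrative:

for (i = 0; i < n; i++) {
	rte_pktmbuf_free_seg(txep[i].mbuf);
	/* no txep[i].mbuf = NULL: nothing on the Tx path tests for NULL */
}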
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Currently, the root table as a destination is not supported.
A jump action that would finally be translated to the underlying root
table in rdma-core should be rejected.
Fixes: f78f747f41 ("net/mlx5: allow jump to group lower than current")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
While probing a device with an unsupported class, the process should
fail because no appropriate driver is found. After traversing all
the drivers, an error value should be returned in this case.
In the previous implementation, a zero value indicating probing success
was wrongly returned.
Fixes: ad435d3204 ("common/mlx5: add bus-agnostic layer")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To optimize the datapath, the mlx5 PMD checked for the mark action on
flow creation and flagged possible destination rxqs (through queue/RSS
actions), then enabled the mark action logic only for the flagged rxqs.
The mark action did not work if no queue/RSS action was in the same
flow, even when the user used multi-group logic to manage the flows.
So, if the mark action was performed in group X and the packet was
moved to group Y > X before being forwarded to the Rx queues, the SW
did not get the mark ID in the mbuf.
Flag the Rx datapath to report the mark action for any queue when the
driver detects the first mark action after the dev_start operation.
Fixes: 8e61555657 ("net/mlx5: fix shared RSS and mark actions combination")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Memory region (MR) lookup by address inside mempool MRs
was not accounting for the upper bound of an MR.
For mempools covered by multiple MRs this could return
a wrong MR LKey, typically resulting in an unrecoverable
TxQ failure:
mlx5_net: Cannot change Tx QP state to INIT Invalid argument
Corresponding message from /var/log/dpdk_mlx5_port_X_txq_Y_index_Z*:
Unexpected CQE error syndrome 0x04 CQN = 128 SQN = 4848
wqe_counter = 0 wq_ci = 9 cq_ci = 122
This is likely to happen with --legacy-mem and IOVA-as-PA,
because EAL intentionally maps pages at non-adjacent PA
to non-adjacent VA in this mode, and MLX5 PMD works with VA.
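A sketch of the corrected bounds check; the field names are hypothetical:

if (addr >= mr->start && addr < mr->end)
	return mr->lkey; /* both bounds must match for multi-MR mempools */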
Fixes: 690b2a88c2 ("common/mlx5: add mempool registration facilities")
Cc: stable@dpdk.org
Reported-by: Wang Yunjian <wangyunjian@huawei.com>
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch changes the logging type from config to data for functions
called in the datapath.
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
The same logging messages were used for both IOTLB cache insertion
failures and IOTLB pending insertion failures.
This patch differentiates them to ease log analysis.
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This patch replaces multi-line logs with multiple single-line logs
in order to ease log filtering based on their socket path.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This patch standardizes logging done in Virtio-net, so that
the Vhost-user socket path is always prepended to the logs.
It will ease log analysis when multiple Vhost-user ports
are in use.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This patch adds the Vhost socket path whenever possible in
order to make debugging possible when multiple Vhost
devices are in use. Some vhost-user layer functions are
modified to pass the device path down to the socket layer.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This patch adds the Vhost-user socket path to Vhost-user
layer logs in order to ease logs filtering.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This patch prepends Vhost logs with the Vhost-user socket
path when available to ease filtering logs for a given port.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This patch adds the name of the device failing vDPA registration.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This patch adds the IOTLB mempool name when logging debug or error
messages, and also prepends the socket path to all the logs.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This patch adds a log of vring-related info in the handling of the vhost
message VHOST_USER_SET_VRING_BASE, which will be useful in live migration
cases.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
This patch fixes an issue where an uninitialized old_rss_key
was used to restore the rss_key.
Coverity issue: 373866
Fixes: 0c9d662070 ("net/virtio: support RSS")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
The function fcntl() can return errors,
so its return value needs to be checked.
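A minimal sketch of the added check, assuming the fd is being switched
to non-blocking mode; the logging macro is illustrative:

int flags = fcntl(fd, F_GETFL);

if (flags < 0 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0) {
	PMD_DRV_LOG(ERR, "fcntl failed: %s", strerror(errno));
	return -1;
}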
Fixes: 6a84c37e39 ("net/virtio-user: add vhost-user adapter layer")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
When eth_virtio_dev_init() fails, the registered virtio-user memory
event callback is not released and the tap device created by the backend
is not destroyed. This can leave residual tap devices on the host, and
creating a new vdev can fail because the new virtio_user_dev may get the
same address pointer, and registering a memory event callback at the
same address is not allowed.
Fixes: ca8326a943 ("net/virtio_user: fix error management during init")
Cc: stable@dpdk.org
Signed-off-by: Harold Huang <baymaxhuang@gmail.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
In virtio, both the Rx queue and the Tx queue are virtqueues, and the VQ
index is 256 for Tx queue 128. The uint8 type of the TxQ VQ index
overflows and overwrites Tx queue 0 data.
This patch fixes the VQ index type to be uint16.
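A minimal sketch of the truncation:

uint8_t  vq8  = 256; /* wraps to 0 and aliases Tx queue 0's data */
uint16_t vq16 = 256; /* stored as intended */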
Fixes: c1f86306a0 ("virtio: add new driver")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
When the event thread polls traffic while a virtq is stopping, the FW
loses synchronization of the virtq indexes.
This causes a live migration (LM) failure in synchronizing the HOST
indexes with the GUEST indexes.
Unset the event thread before the queue stop in the LM process.
Fixes: 31b9c29c86 ("vdpa/mlx5: support close and config operations")
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Newer generation hardware uses slightly different port speed bit widths,
so alter the existing port speed bit range to extend support to the
newer generation hardware while maintaining backward compatibility with
older generation hardware.
The previously reserved bits are now used, which requires adjusting
the BIT values, e.g.:
Before:
PORT_PROPERTY_0[22:21] - Reserved
PORT_PROPERTY_0[26:23] - Supported Speeds
After:
PORT_PROPERTY_0[21] - Reserved
PORT_PROPERTY_0[26:22] - Supported Speeds
To make this backwards compatible, the existing BIT
definitions for the port speeds are incremented by one
to maintain the original position.
Signed-off-by: Selwin Sebastian <selwin.sebastian@amd.com>
Acked-by: Chandubabu Namburu <chandu@amd.com>
Add support for a new port mode that is a backplane
connection without support for auto-negotiation.
Signed-off-by: Selwin Sebastian <selwin.sebastian@amd.com>
Acked-by: Chandubabu Namburu <chandu@amd.com>
Sometimes mailbox commands time out when the Rx data path becomes
unresponsive, which prevents the submission of new mailbox commands
to DXIO. This patch identifies the timeout and resets the Rx data
path so that the next message can be submitted properly.
Signed-off-by: Selwin Sebastian <selwin.sebastian@amd.com>
Acked-by: Chandubabu Namburu <chandu@amd.com>
Simplify and centralize the mailbox command rate change interface by
having a single function perform the writes to the mailbox registers
to issue the request.
Signed-off-by: Selwin Sebastian <selwin.sebastian@amd.com>
Acked-by: Chandubabu Namburu <chandu@amd.com>
For each rate change command submission, the FW has to perform a PHY
power-off sequence internally. For this to happen correctly, the
PLL re-initialization control setting has to be turned off before
sending mailbox commands and re-enabled once the command submission
is complete. Without the PLL control setting, link up takes longer
in a fixed PHY configuration.
Signed-off-by: Selwin Sebastian <selwin.sebastian@amd.com>
Acked-by: Chandubabu Namburu <chandu@amd.com>
Link training is always attempted when in KR mode, but the code is
structured to check if link training has been enabled before attempting
to perform it. Since that check will always be true, simplify the code
to always enable and start link training during KR auto-negotiation.
Signed-off-by: Selwin Sebastian <selwin.sebastian@amd.com>
Acked-by: Chandubabu Namburu <chandu@amd.com>