The generic RTE_LOGTYPE_PMD is a historical relic and should
not be used. Every driver must use dynamic log types.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The generic RTE_LOGTYPE_PMD is a historical relic and should
not be used. Every driver should register its own log types.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
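For reference, a minimal sketch of per-driver dynamic log type registration;
"pmd.net.mydrv" and the MYDRV_LOG macro are illustrative names, not any
particular driver's code:

    #include <rte_common.h>
    #include <rte_log.h>

    static int mydrv_logtype;   /* hypothetical per-driver log type */

    RTE_INIT(mydrv_init_log)
    {
            mydrv_logtype = rte_log_register("pmd.net.mydrv");
            if (mydrv_logtype >= 0)
                    rte_log_set_level(mydrv_logtype, RTE_LOG_NOTICE);
    }

    #define MYDRV_LOG(level, fmt, args...) \
            rte_log(RTE_LOG_ ## level, mydrv_logtype, "mydrv: " fmt "\n", ## args)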
The slave aggregator_port_id is in the [0, RTE_MAX_ETHPORTS-1] range.
If RTE_MAX_ETHPORTS is > 8, we can hit out-of-bound accesses on the
agg_bandwidth[] and agg_count[] arrays.
Fixes: 6d72657ce3 ("net/bonding: add other aggregator modes")
Cc: stable@dpdk.org
Signed-off-by: Hui Zhao <zhaohui8@huawei.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Chas Williams <chas3@att.com>
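An illustrative sketch of the problem shape only (types and the remedy are
assumptions, not necessarily the exact committed fix): an array indexed with a
port id must be able to hold RTE_MAX_ETHPORTS entries.

    /* agg_bandwidth[] and agg_count[] were sized [8] but indexed with a
     * port id in [0, RTE_MAX_ETHPORTS-1]; sizing by RTE_MAX_ETHPORTS is
     * one way to avoid the out-of-bound access. */
    uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
    uint8_t agg_count[RTE_MAX_ETHPORTS] = {0};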
mode_bond_id and mode_band_id are slave ids, stored on 16 bits.
Fixes: f8244c6399 ("ethdev: increase port id range")
Cc: stable@dpdk.org
Signed-off-by: Hui Zhao <zhaohui8@huawei.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
During PF/VF link update, a default speed value of 100M will be set
if get_link_info has failed or the speed is unknown.
Consequently, if the PF is put in no-carrier state, VFs will switch to
"in carrier" state due to a link up plus a link speed set to 100M
(the default value when no speed is detected).
To be consistent with the Linux drivers, where the PF and VFs are in the
same carrier state, set the default speed to undefined (instead of 100M)
and update the VF link status only if the link is up and the speed is
different from undefined.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org
Signed-off-by: Laurent Hardy <laurent.hardy@6wind.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
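A hedged sketch of the intended logic; ETH_SPEED_NUM_NONE stands in here for
the undefined speed and the helper name is hypothetical, not the driver's
actual code:

    struct rte_eth_link link;

    /* ... fill link from get_link_info(), defaulting speed to undefined ... */
    if (link.link_status == ETH_LINK_UP &&
        link.link_speed != ETH_SPEED_NUM_NONE)
            vf_update_link_status(vf, &link);   /* hypothetical helper */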
The speed capability of X553 1GbE should be ETH_LINK_SPEED_1G |
ETH_LINK_SPEED_100M | ETH_LINK_SPEED_10M rather than ETH_LINK_SPEED_1G |
ETH_LINK_SPEED_10G. Correct it to fix the issue.
Fixes: e274f57322 ("ethdev: add speed capabilities")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
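A sketch of the corrected capability mask, using the ethdev speed flags named
above (how ixgbe assembles the mask internally may differ):

    dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
                           ETH_LINK_SPEED_10M;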
Add support for setting the VLAN PCP field via the rte_flow API. Hardware
overwrites the entire 16-bit VLAN TCI field, so both the VLAN VID and
PCP actions must be specified.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
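A hedged rte_flow usage example showing both actions supplied together, as
required above (values are illustrative; needs <rte_flow.h> and
<rte_byteorder.h>):

    struct rte_flow_action_of_set_vlan_vid vid = { .vlan_vid = RTE_BE16(10) };
    struct rte_flow_action_of_set_vlan_pcp pcp = { .vlan_pcp = 3 };
    struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &vid },
            { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP, .conf = &pcp },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };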
Add support for matching VLAN fields via the rte_flow API.
When matching a VLAN pattern, the ethertype field in the hardware
filter specification must contain the VLAN header's ethertype,
not the Ethernet header's ethertype. The hardware automatically
searches for ethertype 0x8100 in the Ethernet header when
parsing an incoming packet against a VLAN pattern.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
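A hedged rte_flow usage example of a VLAN match; the PMD then maps the inner
ethertype into the hardware filter as described above (values are
illustrative):

    struct rte_flow_item_vlan vlan_spec = { .tci = RTE_BE16(10) };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_VLAN, .spec = &vlan_spec },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };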
Remove the compile-time option that controls Tx coalescing latency vs.
throughput behavior. Add a tx_mode_latency devarg instead, to
control the Tx coalescing behavior dynamically.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
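A hedged usage example, assuming the devarg is passed on the PCI device
argument string (the PCI address is illustrative):

    testpmd -w 02:00.4,tx_mode_latency=1 -- -i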
Remove compile-time flags and use dynamic logging for debug prints.
Also remove rarely used debug logs in register access and the datapath.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Link updates come in on the firmware event queue, which is only created
when the device starts. So, don't poll for link status if the firmware
event queue has not been created yet.
This fixes a NULL dereference when accessing the nonexistent firmware
event queue.
Fixes: 265af08e75 ("net/cxgbe: add link up and down ops")
Cc: stable@dpdk.org
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Set the VLAN action mode to VLAN_REWRITE only if VLAN_INSERT has not been
set yet. Otherwise, the resulting VLAN packets will have their VLAN
header rewritten instead of having a new outer VLAN header pushed.
Also fix the VLAN ID extraction logic and endianness issues.
Fixes: 1decc62b1c ("net/cxgbe: add flow operations to offload VLAN actions")
Cc: stable@dpdk.org
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
To avoid name collisions, add cxgbe_ prefix to some global functions.
Also, make some local functions static in cxgbe_filter.c.
Fixes: ee61f5113b ("net/cxgbe: parse and validate flows")
Fixes: 9eb2c9a480 ("net/cxgbe: implement flow create operation")
Fixes: 3a381a4116 ("net/cxgbe: query firmware for HASH filter resources")
Fixes: af44a57798 ("net/cxgbe: support to offload flows to HASH region")
Fixes: 41dc98b082 ("net/cxgbe: support to delete flows in HASH region")
Fixes: 23af667f15 ("net/cxgbe: add API to program hardware layer 2 table")
Cc: stable@dpdk.org
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
The PCI virtual function type was not recognized correctly
for ConnectX-6 VF.
Fixes: f0354d8423 ("net/mlx5: add ConnectX-6 device IDs")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The PCI virtual function type was not recognized correctly
for BlueField VF.
Fixes: f38c54571d ("net/mlx5: split PCI from generic probing")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
During recovery from an error CQE, when using vectorized Rx
burst, the data length in mbufs was not initialized.
As a result, a wrong length was left written in the mbuf, causing
memory overwrite or a wrong error report.
This patch fixes the initialization of the mbuf data length during
recovery from an error CQE, when using vectorized Rx burst.
Fixes: 88c0733535 ("net/mlx5: extend Rx completion with error handling")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The txq_uar_init() routine uses the uninitialized uar_mmap_offset
field in 32-bit configurations, because this field is only initialized
after the txq_uar_init() call.
Fixes: 120dc4a7dc ("net/mlx5: remove device register remap")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
TSO (TCP Segmentation Offload) is enabled by default on vhost.
Add the ability to disable TSO on vhost.
The user should also disable the feature in the virtual machine's XML.
Signed-off-by: Noa Ezra <noae@mellanox.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Matan Azrad <matan@mellanox.com>
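A hedged usage example of the vhost vdev, assuming the knob is exposed as a
tso devarg where 0 disables the feature (the socket path is illustrative):

    --vdev 'eth_vhost0,iface=/tmp/sock0,tso=0'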
Add Igor Chauskin from Amazon as another maintainer of the driver.
Igor is another person from the Amazon team responsible for the
ENA DPDK driver.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Explicitly document that the MAC/VLAN filtering in virtio
is best effort to help users understand why unwanted packets
could still arrive.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
If the virtio header room is reserved with rte_pktmbuf_prepend(), both
the segment data length and the packet length of the mbuf are increased.
The data length becomes equal to the descriptor length, while the packet
length should not include the virtio-net header.
This causes a mismatch in the mbuf structure. Fix this issue by accessing
the mbuf data directly and increasing the descriptor length when needed.
Fixes: 58169a9c81 ("net/virtio: support Tx checksum offload")
Fixes: 892dc798fa ("net/virtio: implement Tx path for packed queues")
Fixes: 4905ed3a52 ("net/virtio: optimize Tx enqueue for packed ring")
Fixes: e5f456a98d ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Reported-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
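A hedged sketch of the direct-access approach described above; head_size
stands for the virtio-net header length and cookie for the mbuf being
transmitted (names taken here as assumptions, not the exact driver code):

    /* Locate the virtio-net header in the mbuf headroom without
     * rte_pktmbuf_prepend(), so pkt_len and data_len stay untouched. */
    struct virtio_net_hdr *hdr;

    hdr = rte_pktmbuf_mtod_offset(cookie, struct virtio_net_hdr *,
                                  -(int)head_size);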
The loop that reads packets does not take all packets, as the number of
available packets (nb_used) is decremented inside the loop.
Taking all available packets provides a performance improvement of 3%.
Fixes: fc3d66212f ("virtio: add vector Rx")
Cc: stable@dpdk.org
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
With the vectorized functions, only the Rx stat for the number of packets
is incremented.
Also update the other statistics.
The performance impact is about 2%.
Fixes: fc3d66212f ("virtio: add vector Rx")
Cc: stable@dpdk.org
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Only the mapping of the vring addresses is being ensured. This causes
errors when the vring size is larger than the IOTLB page size, e.g.
queue sizes > 256 for 4K IOTLB pages.
Ensure the entire vring memory range gets mapped. Refactor the duplicated
code for IOTLB UPDATE and IOTLB INVALIDATE, and add packed virtqueue
support.
Fixes: 09927b5249 ("vhost: translate ring addresses when IOMMU enabled")
Cc: stable@dpdk.org
Signed-off-by: Adrian Moreno <amorenoz@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
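A hedged sketch of the idea with illustrative names only (not the actual
vhost code): walk the ring area page by page so every page gets an IOTLB
mapping, instead of translating only the start address.

    static void
    map_ring_range(struct virtio_net *dev, uint64_t iova,
                   uint64_t size, uint64_t page_size)
    {
            uint64_t off;

            for (off = 0; off < size; off += page_size)
                    map_one_iotlb_page(dev, iova + off);  /* hypothetical helper */
    }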
Check whether there is enough space before the burst enqueue operation.
If more space is needed, try to clean up used descriptors on demand.
This gives more chances to free used descriptors and thus helps
RFC2544 performance. Also deduct failed xmit packets from the total
xmit count.
Fixes: e5f456a98d ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
When doing an xmit in-order enqueue, packets are buffered and then flushed
into the avail ring. Buffered packets can be dropped due to insufficient
space. Moving the stats update to just after a successful avail ring
update guarantees correctness.
Fixes: e5f456a98d ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In some situations, when a virtual machine is starting,
vring_state_changed can be called even though the queue state has not
changed. This fix makes sure that the queue state has really changed
before calling the callback for EVENT_QUEUE_STATE.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Cc: stable@dpdk.org
Signed-off-by: Noa Ezra <noae@mellanox.com>
Reviewed-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Besides the enqueue/dequeue API, other APIs of the builtin net
backend should also be protected.
Fixes: a368804699 ("vhost: protect active rings from async ring changes")
Cc: stable@dpdk.org
Reported-by: Peng He <xnhp0320@icloud.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
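A hedged sketch of the protection pattern, assuming the same virtqueue access
lock used around the enqueue/dequeue path:

    rte_spinlock_lock(&vq->access_lock);
    /* ... builtin net backend work touching the virtqueue ... */
    rte_spinlock_unlock(&vq->access_lock);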
When live migration starts, QEMU will set the ring addresses again for
each virtqueue. In this case, we should try to translate the ring
addresses after invalidating the ring, otherwise virtqueues can
be enabled with the addresses untranslated. Besides, also leverage
the access_ok flag in the non-IOMMU case to prevent the data path
from accessing invalidated virtqueues.
Fixes: 5a4933e56b ("vhost: postpone ring address translations at kick time only")
Cc: stable@dpdk.org
Reported-by: Yilong Lv <lvyilong.lyl@alibaba-inc.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Once the device has been started, don't do the reallocation anymore.
Otherwise, the pointers used in application threads can be invalidated
without proper protection. Instead of introducing a global lock to
protect the change of device pointers, which would hurt performance,
just do the reallocation during setup.
Fixes: af295ad469 ("vhost: realloc device and queues to same numa node as vring desc")
Cc: stable@dpdk.org
Reported-by: Yinan Wang <yinan.wang@intel.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This function is listed under EXPERIMENTAL in the
rte_vhost_version.map, so it needs to be marked
with __rte_experimental in the header file as well.
Found by check-experimental-syms.sh when trying to compile
DPDK with -finstrument-functions. This script didn't
catch this in the normal case, since the function is
declared __rte_always_inline.
This also requires updating the vhost_scsi example to allow
use of this newly marked experimental API.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
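A hedged sketch of the header-side change with a hypothetical function name;
an always-inline declaration still needs the tag, and callers such as the
vhost_scsi example must build with ALLOW_EXPERIMENTAL_API defined:

    /* hypothetical name; the real function lives in the vhost headers */
    __rte_experimental
    static __rte_always_inline uint64_t
    rte_vhost_example_translate(int vid, uint64_t gpa);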
Set RTE_ETH_DEV_CLOSE_REMOVE upon probe so all the private resources
for the port can be freed by rte_eth_dev_close().
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
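The probe-time change in this series boils down to setting the flag on the
ethdev data, roughly (hedged sketch, not the exact per-driver code):

    eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;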
Set RTE_ETH_DEV_CLOSE_REMOVE upon probe so all the private resources
for the port can be freed by rte_eth_dev_close().
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Set RTE_ETH_DEV_CLOSE_REMOVE upon probe so all the private resources
for the port can be freed by rte_eth_dev_close().
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Set RTE_ETH_DEV_CLOSE_REMOVE upon probe so all the private resources
for the port can be freed by rte_eth_dev_close().
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Set RTE_ETH_DEV_CLOSE_REMOVE upon probe so all the private resources
for the port can be freed by rte_eth_dev_close().
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
This patch adds multi-process support for the hns3 PMD driver.
Multiple processes select queues by configuring RSS or the
flow director. The primary process supports various management
ops, while the secondary processes only support query ops.
The primary process notifies the secondary processes to start
or stop the transceiver.
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Wang (Jushui) <wangmin3@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
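A hedged sketch of the usual DPDK multi-process split described above (the
return value is illustrative):

    /* Management ops are restricted to the primary process; secondary
     * processes are limited to query ops. */
    if (rte_eal_process_type() != RTE_PROC_PRIMARY)
            return -EPERM;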
This patch adds reset handling for the hns3 PMD driver.
The following three scenarios trigger the reset process,
and the driver settings are restored after the reset
succeeds:
1. A reset interrupt is received
2. The PF receives a hardware error interrupt
3. The VF is notified by the PF to reset
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds the stats_get, stats_reset, xstats_get, xstats_get_names,
xstats_reset, xstats_get_by_id and xstats_get_names_by_id related
functions.
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds handling of abnormal interrupts reported by the NIC
hardware to the hns3 PMD driver.
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds the get_reg related functions for the hns3 PMD driver.
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds queue-related operations and the packet transmit and
receive functions.
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Wang (Jushui) <wangmin3@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds support for the hns3 VF PMD driver.
In the current version, a VF device is only supported when it is bound to
vfio_pci or igb_uio and driven by the DPDK driver while the PF is driven
by the kernel-mode hns3 ethdev driver. A VF is not supported when the PF
is driven by the DPDK driver.
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Chengchang Tang <tangchengchang@hisilicon.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
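A hedged usage example for binding the VF before starting the DPDK
application (the PCI address is illustrative):

    usertools/dpdk-devbind.py --bind=vfio-pci 0000:7d:01.0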
This patch adds mailbox support to the hns3 PMD driver; the mailbox is
used for communication between the PF and VF drivers.
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Hao Chen <chenhao164@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>