Split the control and data paths to make the datapath substitutable and
possibly reusable with an alternative control path.
The libefx-based Tx datapath is bound to the libefx control path, but
it should be possible to use other datapaths with alternative
control path(s).
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Split the control and data paths to make the datapath substitutable and
possibly reusable with an alternative control path.
The libefx-based Rx datapath is bound to the libefx control path, but
it should be possible to use other datapaths with alternative
control path(s).
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Enable the ThunderX nicvf PMD driver in the common
config, as it has no build dependency
on any external library or architecture.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
ConnectX-5 supports an enhanced version of multi-packet send (MPS). An MPS Tx
descriptor can carry multiple packets, either by including pointers to
packets or by inlining packet data. Inlining packet data can help to
better utilize PCIe bandwidth. In addition, enhanced MPS supports a hybrid
mode, mixing inlined packets and pointers in a descriptor. This feature is
enabled by default if supported by HW.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
A tap netdevice does not support flow control; ensure nothing but
RTE_FC_NONE mode can be set.
Signed-off-by: Pascal Mazon <pascal.mazon@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
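For illustration, a sketch of how an application would comply with this
constraint (the function name and error handling are illustrative, not part
of the patch):

    #include <string.h>
    #include <rte_ethdev.h>

    /* The tap PMD accepts only RTE_FC_NONE; any other mode is rejected. */
    static int
    tap_disable_flow_ctrl(uint16_t port_id)
    {
            struct rte_eth_fc_conf fc_conf;

            memset(&fc_conf, 0, sizeof(fc_conf));
            fc_conf.mode = RTE_FC_NONE;
            return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }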
A tap netdevice actually receives every packet, without any filtering
whatsoever. There is no need for any multicast address registration
to receive multicast packets.
Signed-off-by: Pascal Mazon <pascal.mazon@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The MTU is assigned to the tap netdevice according to the argument, but
packet transmission and reception just write/read on an fd with the
default limit being the socket buffer size.
As a new rte_eth_dev_data is allocated during tap device init, ensure
dev->data->mtu is set again.
Once the actual netdevice is created via tun_alloc(), make sure to apply
the desired MTU to the netdevice.
Signed-off-by: Pascal Mazon <pascal.mazon@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
As soon as the netdevice is created, update pmd->mac_addr with its
actual MAC address.
Signed-off-by: Pascal Mazon <pascal.mazon@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Make the MAC statistics update interval tunable
by means of the 'stats_update_period_ms' kvarg parameter, making
it possible to use values different from 1000 ms in the case of
SFN8xxx boards, provided that the firmware version is 6.2.1.1033.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
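For example (the PCI address and value below are placeholders, not part of
the patch), the kvarg is passed as a device argument on the EAL command line:

    -w 0000:01:00.0,stats_update_period_ms=500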
Support individual/group destination address match (unknown unicast
and all-multicast, respectively, in firmware terms).
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Exact match of source and destination ports is supported by the parser.
The IP protocol match is enforced to UDP.
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
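As an illustration only (standard testpmd flow syntax; the port, queue and
port numbers are placeholders, and whether a given pattern is accepted
depends on the adapter), such a rule could look like:

    flow create 0 ingress pattern eth / ipv4 / udp src is 4789 dst is 4789 / end actions queue index 1 / end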
Exact match of source and destination ports is supported by the parser.
The IP protocol match is enforced to TCP.
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Exact match of IP protocol, source and destination
addresses is supported by the parser.
The EtherType match is enforced to the IPv6 EtherType.
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Exact match of IP protocol, source and destination
addresses is supported by the parser.
The EtherType match is enforced to the IPv4 EtherType.
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Only exact match of the VLAN ID bits is supported, and it is required in
the VLAN item. A mask matching the VLAN ID bits only is required; the
default mask, which matches the entire TCI, is not supported.
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
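For illustration (a fragment; the VID and queue context are placeholders), a
conforming VLAN item sets the VID-only mask explicitly:

    struct rte_flow_item_vlan vlan_spec = { .tci = rte_cpu_to_be_16(100) };
    struct rte_flow_item_vlan vlan_mask = { .tci = rte_cpu_to_be_16(0x0fff) };
    struct rte_flow_item vlan_item = {
            .type = RTE_FLOW_ITEM_TYPE_VLAN,
            .spec = &vlan_spec,
            .mask = &vlan_mask,
    };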
Only the pattern items VOID and ETH and the actions VOID and QUEUE are
supported for now.
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
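A minimal sketch of a rule this initial support can express, steering packets
with a given destination MAC address to an Rx queue (the function name, MAC
address and queue index are illustrative, not part of the patch):

    #include <rte_flow.h>

    static struct rte_flow *
    steer_dst_mac_to_queue(uint16_t port_id)
    {
            struct rte_flow_attr attr = { .ingress = 1 };
            struct rte_flow_item_eth eth_spec = {
                    .dst.addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 },
            };
            struct rte_flow_item_eth eth_mask = {
                    .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
            };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH,
                      .spec = &eth_spec, .mask = &eth_mask },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action_queue queue = { .index = 1 };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };
            struct rte_flow_error error;

            return rte_flow_create(port_id, &attr, pattern, actions, &error);
    }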
With all vmxnet3 version 3 changes incorporated in the vmxnet3 driver,
the driver can configure emulation to run at vmxnet3 version 3, provided
the emulation advertises support for version 3.
This patch also updates release notes.
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Acked-by: Jin Heo <heoj@vmware.com>
Prior to this commit, Tx checksum offload was supported only for the
inner headers.
This commit adds support for the hardware to compute the checksum for the
outer headers as well.
The support covers the GRE and VXLAN tunneling protocols.
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
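A sketch of how an application might request the offload on a per-packet
basis (the header lengths assume untagged Ethernet and IPv4 without options;
the mbuf pointer m is illustrative):

    m->outer_l2_len = 14;   /* outer Ethernet header */
    m->outer_l3_len = 20;   /* outer IPv4 header */
    m->ol_flags |= PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM |
                   PKT_TX_TUNNEL_VXLAN;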
* Add link block check for KR.
* Complete HW initialization even if SFP is not present.
* Add VF xcast promiscuous mode.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch enables the i40e driver on PowerPC, along with its AltiVec
intrinsic support.
Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Acked-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Add a KNI PMD which wraps librte_kni for ease of use.
The KNI PMD can be used like any regular PMD to send/receive packets to and
from the Linux networking stack.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Yong Wang <yongwang@vmware.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Yong Wang <yongwang@vmware.com>
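The PMD can be instantiated from the EAL command line as a virtual device,
for example (illustrative invocation):

    testpmd -l 0-1 -n 4 --vdev=net_kni0 -- -i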
Fixes: 0ba3870e75 ("doc: add guide to use virtio-user as exceptional path")
Cc: stable@dpdk.org
Signed-off-by: Yong Wang <wang.yong19@zte.com.cn>
Acked-by: John McNamara <john.mcnamara@intel.com>
Update the document with the available EAL command
line option to load the PMD drivers that are built
as shared libraries.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
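For example (the library path and name are illustrative), a shared-library
PMD is loaded with the EAL '-d' option:

    ./testpmd -d /path/to/librte_pmd_e1000.so -l 0-1 -n 4 -- -i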
Currently, device hotplug is supported only for UIO-managed devices.
This patch adds the same functionality for VFIO.
It has been validated through tests using an IOMMU and also with
VFIO in no-IOMMU mode.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Since there is no "descriptor done" flag as on Intel drivers, the
approach is different in the mlx5 driver:
- for Tx, we call txq_complete() to free descriptors processed by
  the HW, then check whether the descriptor is between tail and head;
- for Rx, we need to browse the CQEs, handling compressed ones,
  to get the number of used descriptors.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Introduce a new API to get the status of a descriptor.
For Rx, it is similar to the rx_descriptor_done API, except that it
differentiates "used" descriptors (which are held by the driver and not
yet returned to the hardware).
For Tx, it is a new API.
The descriptor_done() API, and probably the rx_queue_count() API, could
be replaced by this new API as soon as it is implemented in all PMDs.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
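A sketch of how an application might poll the new API (the port, queue and
offset variables are placeholders):

    int status;

    /* Rx: has the descriptor 'offset' entries ahead of the next one to be
     * received already been filled by the hardware? */
    status = rte_eth_rx_descriptor_status(port_id, queue_id, offset);
    if (status == RTE_ETH_RX_DESC_DONE) {
            /* a packet is waiting in that slot */
    } else if (status == RTE_ETH_RX_DESC_AVAIL) {
            /* still owned by the hardware, nothing received yet */
    }

    /* Tx: RTE_ETH_TX_DESC_FULL, _DONE and _UNAVAIL report whether the slot
     * still holds an in-flight packet, can be reused, or is unavailable. */
    status = rte_eth_tx_descriptor_status(port_id, queue_id, offset);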
Now that the enqueue function returns the amount of space in the ring,
we can use that to replace the old watermark functionality. Update the
example app to do so, and re-enable it in the examples Makefile.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Add an extra parameter to the ring dequeue burst/bulk functions so that
those functions can optionally return the number of remaining objects in the
ring. This information can be used by applications in a number of ways,
for instance, with single-consumer queues, it provides a max
dequeue size which is guaranteed to work.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
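For illustration (using the new signature; the ring pointer and burst size
are placeholders), a caller can read the count of objects left behind:

    void *objs[32];
    unsigned int available;
    unsigned int n = rte_ring_dequeue_burst(r, objs, 32, &available);

    /* With a single consumer, 'available' objects are still in the ring,
     * so an immediate follow-up dequeue of up to that many cannot fail. */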
Add an extra parameter to the ring enqueue burst/bulk functions so that
those functions can optionally return the amount of free space in the
ring. This information can be used by applications in a number of ways,
for instance, with single-producer queues, it provides a max
enqueue size which is guaranteed to work. It can also be used to
implement watermark functionality in apps, replacing the older
functionality with a more flexible version, which enables apps to
implement multiple watermark thresholds, rather than just one.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
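For illustration (using the new signature; the threshold macro and handler
name are hypothetical), an application-level watermark can be built on the
returned free space:

    unsigned int free_space;
    unsigned int n = rte_ring_enqueue_burst(r, objs, count, &free_space);

    /* Hypothetical watermark: treat the ring as congested once fewer than
     * APP_WM_FREE slots remain free after the enqueue. */
    if (free_space < APP_WM_FREE)
            app_handle_backpressure();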
The bulk functions for rings return 0 for all elements enqueued and a
negative value for no space. Change that to make them consistent with the
burst functions by returning the number of elements enqueued/dequeued, i.e. 0 or N.
This change also allows the return value from enq/deq to be used directly
without a branch for error checking.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
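For illustration (using the final signature from this series, which also
takes an optional free-space pointer; the counter is a placeholder):

    /* Either all 'count' objects are enqueued or none; the return value is
     * the number actually enqueued (count or 0), usable without checking
     * for a negative error code. */
    unsigned int done = rte_ring_enqueue_bulk(r, objs, count, NULL);
    total_enqueued += done;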
Remove the watermark support. A future commit will add support for having
enqueue functions return the amount of free space in the ring, which will
allow applications to implement their own watermark checks, while also
being more useful to the app.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
There was a compile-time setting to make a ring yield when
it entered a loop in MP or MC rings while waiting for the tail pointer update.
Build-time settings are not recommended for enabling/disabling features,
and since this was off by default, remove it completely. If needed, a
runtime-enabled equivalent can be used.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The debug option only provided statistics to the user, most of
which could be tracked by the application itself. Remove it as a
compile-time option and as a feature, simplifying the code.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Users compiling DPDK should not need to know or care about the arrangement
of cachelines in the rte_ring structure. Therefore just remove the build
option and make the structures always split. On platforms with 64B
cachelines, use 128B rather than 64B alignment for improved performance,
since it stops the producer and consumer data from being on adjacent cachelines.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Describe the changes in the thread layout, with an updated diagram.
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>