The vdpa sample application creates vhost-user sockets by using the
vDPA backend. vDPA stands for vhost Data Path Acceleration, which utilizes
virtio-ring-compatible devices to serve the virtio driver directly and
enable datapath acceleration. Since the vDPA driver sets up the vhost
datapath, this application does not need to launch dedicated worker threads
for vhost enqueue/dequeue operations.
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add an enetc usage document describing how to compile and run
DPDK applications on enetc-supported platforms.
The document introduces the enetc driver, its supported
platforms and supported features.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add callback for updating link status information.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Zyta Szpak <zr@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add the part of the PMD that performs actual packet reception and transmission.
Signed-off-by: Yelena Krivosheev <yelena@marvell.com>
Signed-off-by: Dmitri Epshtein <dima@marvell.com>
Signed-off-by: Zyta Szpak <zr@semihalf.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Support counter action for 1400 series adapters.
The adapter API for allocating and freeing counters is independent of
the adapter match/action API. If the count action is requested, a
counter is first allocated and then assigned to the filter; when
the filter is deleted, the counter is freed as well.
Counters are periodically DMA'd to pre-allocated consistent memory,
controlled by the define VNIC_FLOW_COUNTER_UPDATE_MSECS. The default is
100 milliseconds.
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
When a device is added with a devargs (hotplug or whitelist),
the bus pointer can be retrieved via its devargs.
But there is no such devargs.bus in case of standard scan.
A pointer to the rte_bus handle is added to rte_device.
When a device is allocated (during a scan),
the pointer to its bus is assigned.
This makes it possible to remove an rte_device,
using the function pointer from its bus.
The function rte_bus_find_by_device() becomes unnecessary,
and may be removed later.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
The function rte_devargs_remove(), which is intended to be internal,
can take a devargs structure as argument.
The matching is still done by string comparison of the bus name and
device name.
This is simpler and may allow a different devargs matching in the future.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
rte_eal_parse_devargs_str() does not support parsing the bus name
at the start of devargs. So it was renamed and deprecated.
rte_eal_devargs_add(), rte_eal_devargs_type_count() and
rte_eal_devargs_dump() were declared deprecated and had their
implementation body renamed.
All these functions were deprecated in release 18.05.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Currently, mempools can only be allocated using either native
DPDK memory or anonymous memory. This patch adds two new
methods to allocate mempools using external memory (regular or
hugepage memory), and adds documentation about them to the testpmd
user guide.
It adds a new flag "--mp-alloc", with four possible values:
native (use regular DPDK allocator), anon (use anonymous
mempool), xmem (use externally allocated memory area), and
xmemhuge (use externally allocated hugepage memory area). Old
flag "--mp-anon" is kept for compatibility.
All external memory is allocated from the same external heap,
but each mempool allocates and adds its own new memory area.
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Add API to allow creating new malloc heaps. They will be created
with socket IDs above RTE_MAX_NUMA_NODES, to avoid clashing
with internal heaps.
This breaks the ABI, so document the change.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
We will need to refer to external heaps in some way. While we use
heap IDs internally, for external API use it has to be something
more user-friendly. So, we will be using a string to uniquely
identify a heap.
This breaks the ABI, so document the change.
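As a rough sketch of how an application could drive such a named heap
(names and flow are illustrative; it assumes the rte_malloc_heap_create()
and rte_malloc_heap_get_socket() calls added by this series, and omits
error handling and the separate step of adding memory to the heap):

    #include <rte_malloc.h>

    /* Sketch: create a named heap and allocate from it through its
     * synthetic socket ID. Memory must first be added to the heap
     * (e.g. via rte_malloc_heap_memory_add()) for the allocation to
     * succeed; that step is omitted here. */
    static void *
    alloc_from_named_heap(size_t len)
    {
        void *buf = NULL;

        if (rte_malloc_heap_create("app_ext_heap") == 0) {
            int socket = rte_malloc_heap_get_socket("app_ext_heap");

            if (socket >= 0)
                buf = rte_malloc_socket("example", len, 0, socket);
        }
        return buf;
    }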
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
We will be assigning "invalid" socket IDs to external heaps, and
malloc will now be able to verify whether a supplied socket ID is in
fact a valid one, rendering parameter checks for sockets
obsolete.
This changes the semantics of what we understand by "socket ID",
so document the change in the release notes.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Switch over all parts of EAL to use heap ID instead of NUMA node
ID to identify heaps. The heap ID for DPDK-internal heaps is the NUMA
node's index within the detected NUMA node list. Heap IDs for
external heaps will be assigned in order of their creation.
This breaks the ABI, so document the changes.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
When we allocate and use DPDK memory, we need to be able to
differentiate between DPDK hugepage segments and segments that
were made part of DPDK but are externally allocated. Add such
a property to memseg lists.
This breaks the ABI, so document the change in release notes.
This also breaks a few internal assumptions about memory
contiguity, so adjust malloc code in a few places.
All current calls for memseg walk functions were adjusted to
ignore external segments where it made sense.
Mempools are a special case, because we may be asked to allocate
a mempool on a specific socket, and we need to ignore all page
sizes on other heaps or other sockets. Previously, this
assumption of knowing all page sizes was not a problem, but it
will be now, so we have to match socket ID with page size when
calculating the minimum page size for a mempool.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Previously, to calculate length of memory area covered by a memseg
list, we would've needed to multiply page size by length of fbarray
backing that memseg list. This is not obvious and unnecessarily
low level, so store length in the memseg list itself.
This breaks ABI, so bump the EAL ABI version and document the
change. Also, while we're breaking ABI, pack the members a little
better.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
This patch adds support for using the select call with the qman portal fd
for timeout-based dequeue requests in eventdev.
If an event is available, the qman portal fd will be set
and the function will wake up. If no event is available,
it will wait only until the given timeout expires.
In interrupt mode the timeout ticks are interpreted as microseconds.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Currently, command-line switches for legacy mem mode or single-file
segments mode are only stored in the internal config. This leads to a
situation where these flags always have to match between the primary
and secondary processes, which is bad for usability.
Fix this by storing these flags in the shared config as well, so
that secondary process can know if the primary was launched in
single-file segments or legacy mem mode.
This bumps the EAL ABI, however there's an EAL deprecation notice
already in place[1] for a different feature, so that's OK.
[1] http://patches.dpdk.org/patch/43502/
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Make the ethernet port id passed into
rte_event_eth_rx_adapter_caps_get() 16 bit.
Also, update the event rx adapter test to use 16 bit
ethernet port ids.
Fixes: c2189c907d ("eventdev: make ethdev port identifiers 16-bit")
Cc: stable@dpdk.org
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Add programmer's guide doc to explain the use of the
Event Ethernet Tx Adapter library.
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The Ethernet Tx adapter abstracts the transmit stage of an
event-driven packet processing application. The transmit
stage may be implemented with eventdev PMD support or use a
rte_service function implemented in the adapter. These APIs
provide a common configuration and control interface and
a transmit API for the eventdev PMD implementation.
The transmit port is specified using mbuf::port. The transmit
queue is specified using the rte_event_eth_tx_adapter_txq_set()
function.
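A minimal sketch of handing a packet to the Tx adapter (illustrative
only; the event field values and the event port used are assumptions,
not part of this change):

    #include <rte_eventdev.h>
    #include <rte_event_eth_tx_adapter.h>
    #include <rte_mbuf.h>

    /* Sketch: select the NIC port/queue on the mbuf, then enqueue an
     * event carrying it through the adapter's enqueue call. */
    static void
    tx_via_adapter(uint8_t evdev_id, uint8_t ev_port, struct rte_mbuf *m,
                   uint16_t eth_port, uint16_t eth_txq)
    {
        struct rte_event ev = { 0 };

        m->port = eth_port;                           /* transmit port */
        rte_event_eth_tx_adapter_txq_set(m, eth_txq); /* transmit queue */

        ev.event_type = RTE_EVENT_TYPE_CPU;
        ev.op = RTE_EVENT_OP_NEW;
        ev.mbuf = m;
        rte_event_eth_tx_adapter_enqueue(evdev_id, ev_port, &ev, 1);
    }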
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The DSW event device is documented in DPDK Programmer's Guide.
The MAINTAINERS file and the 18.11 release notes are updated.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The change set below was supposed to bump the eventdev shared library
version. It missed updating the version number, so fix it now.
Fixes: 3810ae4357 ("eventdev: add interrupt driven queues to Rx adapter")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Add 3DES-ECB and AES-XTS to the list of supported ciphers.
Signed-off-by: Anoob Joseph <anoob.joseph@caviumnetworks.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Update the QAT documentation to show that it supports CCM.
Fixes: ab56c4d9ed ("crypto/qat: support AES-CCM")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Cel <tomaszx.cel@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
LB stands for 'Linear Buffers'. For 'Scatter-Gather List' we have
the SGL acronym.
Fixes: 2717246ecd ("cryptodev: replace mbuf scatter gather flag")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Update the doc for virtio-user to use the latest testpmd
parameters and commands.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Document MTR (metering) and TM (traffic management) usage plus
do some small updates here and there.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
This patch introduces necessary changes required by MUSDK 18.09 library.
* As of MUSDK 18.09, pp2_cookie_t is no longer available. The RX
descriptor cookie is now defined as a plain u64, so the existing cast
is no longer valid.
* MUSDK 18.09 increased the number of available bpools (buffer HW pools)
by introducing DMA region support. Update the mvpp2 driver accordingly.
* replace MV_NET_IP4_F_TOS with MV_NET_IP4_F_DSCP
Before this patch, the API allowed configuring a classification rule
according to IPv4 TOS, which was not supported by the classifier. This
patch fixes this by using the proper field.
* use 48 bit address mask
We cannot get pointers exceeding 48 bits, thus a 48-bit
mask is sufficient for extracting the higher IOVA address bits.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Yuval Caduri <cyuval@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Reviewed-by: Shlomi Gridish <sgridish@marvell.com>
Reviewed-by: Alan Winkowski <walan@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Change QoS configuration file syntax for port's default policer
setup.
Since the default policer configuration is performed before
any other policer configuration, we can pick a default id.
This simplifies the default policer configuration since the user
no longer has to choose ids from the range [0, PP2_CLS_PLCR_NUM].
Explicitly document values for rate_limit_enable field.
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
set_mc_addr_list method is not implemented by the driver yet.
Fixes: a46f8d584e ("net/failsafe: add fail-safe PMD")
Cc: stable@dpdk.org
Signed-off-by: Evgeny Im <evgeny.im@oktetlabs.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
For IA, the AVX2 vector path is only recommended on later
platforms (identified by AVX512 support, like SKL etc.). This is because
performance benchmarks show a degradation when running the AVX2 vector
path on earlier platforms (BDW/HSW) in some cases, although we still
observe a performance gain with some real workloads.
So this patch introduces the new devarg use-latest-supported-vec to
force the driver to always select the latest supported vector path. Apps
are then able to take the AVX2 path on early platforms. This logic can
be re-used if an AVX512 vector path is added in the future.
This patch only affects IA platforms. The selected vector path would be
as follows:
Without devarg / devarg = 0:
    Machine    vPMD
    AVX512F    AVX2
    AVX2       SSE4.2
    SSE4.2     SSE4.2
    <SSE4.2    Not Supported
With devarg = 1:
    Machine    vPMD
    AVX512F    AVX2
    AVX2       AVX2
    SSE4.2     SSE4.2
    <SSE4.2    Not Supported
Other platforms can also apply the same logic if necessary in future.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
__rte_mbuf_raw_free and __rte_pktmbuf_prefree_seg have been deprecated for
a long time now (early 17.05), are not part of the ABI and are easily
replaced with existing API.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This will allow the same config file to be used from Meson.
The result has been verified to be identical via diffoscope.
Signed-off-by: Luca Boccassi <bluca@debian.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This will make it possible to generate the file in the same way from
Meson as well.
Signed-off-by: Luca Boccassi <bluca@debian.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The QAT compression driver was named "qat".
Rename to compress_qat for consistency with other compressdev drivers
and with crypto_qat.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Added description of the build configuration options for QAT.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Update PMD build section.
Link to the kernel dependency section and refactor the text
between those two sections.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Add overview of QAT doc sections and link between them.
Indent all sections within the crypto and common sections
to the next level.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Add a section to the common QAT part of the doc about
which tests can be used to exercise
the QAT compression and crypto PMDs.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Integrate accelerated networking support into netvsc PMD.
This allows netvsc to manage VF without using failsafe or vdev_netvsc.
For the exception vswitch path some tests like transmit
get a 22% increase in packets/sec.
For the VF path, the code is slightly shorter but has no
real change in performance.
Pro:
* using netvsc is more like other DPDK NIC's
* the exception packet uses less CPU
* much smaller code size
* no locking required on VF transmit/receive path
* no legacy Linux network device to get mangled by userspace
* much simpler (1K vs 9K) LOC
* unified extended statistics
Con:
* using netvsc has a more complex startup model
* no bifurcated driver support
* no flow support (since host does not have flow API).
* no tunnel offload support
* no receive interrupt support
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Implement callback functionality on link state changes.
Unlike most other PMDs, this is not driven off of an interrupt file
descriptor. Instead, it happens when a link state change message arrives
in the common ring buffer.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Removed the DEV_RX_OFFLOAD_CRC_STRIP offload flag.
Without any specific Rx offload flag, the default behavior of PMDs is to
strip the CRC.
PMDs that support keeping the CRC should advertise the
DEV_RX_OFFLOAD_KEEP_CRC Rx offload capability.
Applications that require keeping the CRC should check the PMD capability
first and, if it is supported, enable this feature by setting
DEV_RX_OFFLOAD_KEEP_CRC in the Rx offload flags passed to
rte_eth_dev_configure().
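A minimal application-side sketch of that check-then-enable flow (nothing
beyond standard ethdev calls is assumed):

    #include <rte_ethdev.h>

    /* Sketch: enable CRC keeping only when the PMD advertises support. */
    static int
    configure_with_keep_crc(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_conf conf = { 0 };

        rte_eth_dev_info_get(port_id, &dev_info);
        if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_KEEP_CRC)
            conf.rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }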
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tomasz Duszynski <tdu@semihalf.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Jan Remes <remes@netcope.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Add flow operations to match packets based on destination MAC address.
Allocate and program hardware MPS table with the destination MAC
address to be matched against. The returned MPS index is then used while
offloading flows to LETCAM (maskfull) and HASH (maskless) filter regions.
Also update existing mac_addr_set() to use the new MPS table API.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add flow API operations to offload vlan push, pop, and rewrite actions.
For vlan push or rewrite actions, allocate and program an entry from
L2T table. Use the L2T index to program vlan actions for LETCAM
(maskfull) and HASH (maskless) filters.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Originally the vhost_crypto sample application only supported a single
core. This patch adds multi-core support with more flexible
options.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch fixes wrong usage of bind command in vhost.rst.
Using "dpdk-devbind.py -b=uio_pci_generic 0000:00:04.0" gives
an error of "unbind failed". It should be "-b uio_pci_generic" so
it will work correctly.
Fixes: a971c509a5 ("doc: update vhost sample guide")
Cc: stable@dpdk.org
Signed-off-by: Rami Rosen <ramirose@gmail.com>
Acked-by: Zhiyong Yang <zhiyong.yang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Current code assumes a MAC change can occur when the port has been
started. In fact, some NICs require this port state for the change to
succeed, but other NICs do not always support a MAC change in that
state.
This patch adds a new device flag for a device to advertise this
limitation; if the flag is set, the MAC is changed before the
port starts.
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
On ConnectX-4 Lx the Multi Packet Send (MPW) feature is considered
insecure, as in some cases where the application provides incorrect mbufs
in the Tx burst, the host or NIC can get stuck.
Hence, the feature is disabled by default for this specific NIC.
Users can still enable this feature and enjoy the performance gain
(mostly for a low number of cores) by using the txq_mpw_en devarg.
This patch will impact the out-of-the-box performance of some applications
using ConnectX-4 Lx, for the sake of security and robustness.
Since we need different defaults based on the underlying device, the mpw
field in the configuration struct was extended to also contain the
MLX5_ARG_UNSET option.
Cc: stable@dpdk.org
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Add tx_done_cleanup ethdev hook to allow application to
control if/when it wants completions to be handled.
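The application-facing call is rte_eth_tx_done_cleanup(); a small sketch
(the free count of 64 is arbitrary):

    #include <rte_ethdev.h>

    /* Sketch: ask the PMD to release up to 64 completed Tx mbufs on a
     * queue. Returns the number freed, or a negative value such as
     * -ENOTSUP when the PMD does not implement the hook. */
    static int
    reclaim_tx_mbufs(uint16_t port_id, uint16_t queue_id)
    {
        return rte_eth_tx_done_cleanup(port_id, queue_id, 64);
    }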
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
The patch changes the rx_burst profiling approach:
1. VTune's instrumentation is removed
2. an empty hook callback for profiling is added
This way all VTune-specific logic moves to the VTune side.
The hook is enabled only when the CONFIG_RTE_ETHDEV_PROFILE_WITH_VTUNE
option is turned on. VTune uses this hook to attach to the polling cycle.
It is not possible to attach to rx_burst directly, as it is inline.
Signed-off-by: Ilia Kurakin <ilia.kurakin@intel.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Start version numbering for a new release cycle,
and introduce a template file for release notes.
The release notes comments have a new block to suggest
the order of items, inspired by Ferruh's proposal.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: John McNamara <john.mcnamara@intel.com>
using "The DPDK Contributors" as decided by techboard.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Add a multithreading-related description to the programmer's
guide for the hash library.
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
A very simple version of the vhost-user driver in the vhost sample will be
used if the builtin-net-driver option is enabled. This driver is based on
generic vhost lib APIs. Unfortunately, the implementation is incompatible
with QEMU, as the protocol feature is not supported.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Experimental API text has been moved into a sub-section of ABI Policy.
A paragraph has been added to explain the process for removal of an
experimental tag.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The devargs parsing function has changed in 18.08.
The devargs rework should be completed in 18.11.
Fixes: a23bc2c4e0 ("devargs: add non-variadic parsing function")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The SVG images must be referenced without their extension,
because it is converted to PNG for PDF.
Fixes: 54c4cbb6cc ("doc: add graphics to bbdev guide")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Since netvsc PMD does not support SR-IOV accelerated networking,
it is not recommended for use on Azure. Make this clear in the
guide.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
The old offload API is removed in 18.08,
so the library version must be increased,
in order to show the incompatibility with 18.05 one.
Fixes: ab3ce1e0c1 ("ethdev: remove old offload API")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Due to the upcoming external memory support [1], some API and ABI
changes will be required. In addition, although the changes called
out in the deprecation notice are not yet present in the form of code
in the published RFC itself, they are based on consensus on the
mailing list [2] on how to best implement this feature.
[1] http://patches.dpdk.org/project/dpdk/list/?series=453&state=*
[2] https://mails.dpdk.org/archives/dev/2018-July/108002.html
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
Acked-by: Zhihong Wang <zhihong.wang@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Hotplug functions should be used directly to add and remove devices.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
These functions have been buggy from the very beginning and should not be used.
Generic EAL hotplug mechanisms should be used instead.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Functions rte_mempool_populate_phys(), rte_mempool_virt2phy() and
rte_mempool_populate_phys_tab() are just wrappers for corresponding
IOVA functions and were deprecated in v17.11.
Functions rte_mempool_xmem_create(), rte_mempool_xmem_size(),
rte_mempool_xmem_usage() and rte_mempool_populate_iova_tab() were
deprecated in v18.05 and removal was announced earlier in v18.02.
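For reference, a hedged sketch of moving from one of the removed wrappers
to its IOVA equivalent (the one-to-one replacement is an assumption based
on the wrapper relationship described above):

    #include <rte_mempool.h>

    /* Sketch: rte_mempool_virt2phy(mp, elt) used to return an element's
     * bus address; the IOVA variant takes only the element pointer. */
    static rte_iova_t
    elt_bus_addr(const void *elt)
    {
        return rte_mempool_virt2iova(elt);
    }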
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
l2fwd_fork relies on a multiprocess model that DPDK does not support
(calling rte_eal_init() before fork()), in particular in light of recent
EAL changes like the multiprocess communication channel.
This example can mislead users into thinking this is a supported
multiprocess model; hence, this commit removes this example and the
corresponding user guide documentation as well.
This patch was made following this mailing list discussion:
http://mails.dpdk.org/archives/dev/2018-July/108106.html
Signed-off-by: Gage Eads <gage.eads@intel.com>
Add a note to the 'link' command in the IP Pipeline documentation
specifying the PCI device name format required to run the application.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
The rte_ring implementation is non-preemptible only under certain
circumstances. This clarification is helpful for data plane and
control plane communication using rte_ring.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Without this patch, the testpmd command to configure the Rx offload
keep_crc would fail and report "Bad argument".
This patch also fixes the command to configure the Tx offload
mbuf_fast_free.
Fixes: 70815c9eca ("ethdev: add new offload flag to keep CRC")
Fixes: c73a907187 ("app/testpmd: add commands to test new offload API")
Cc: stable@dpdk.org
Signed-off-by: Wei Dai <wei.dai@intel.com>
Tested-by: Yuan Peng <yuan.peng@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Add suggested DPDK/kernel driver/firmware version matching list for i40e.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
This reverts the patch that enabled mbuf fast free.
There are two main reasons.
First, enic_fast_free_wq_bufs is broken. When
DEV_TX_OFFLOAD_MBUF_FAST_FREE is enabled, the driver calls this
function to free transmitted mbufs. This function currently does not
reset next and nb_segs. This is simply wrong as the fast-free flag
does not imply anything about next and nb_segs.
We could fix enic_fast_free_wq_bufs by making it call
rte_pktmbuf_prefree_seg to reset the required fields. But that negates
most of the cycle savings.
Second, there are customer applications that blindly enable all Tx
offloads supported by the device. Some of these applications do not
satisfy the requirements of mbuf fast free (i.e. a single pool per
queue and refcnt = 1), and end up crashing or behaving badly.
Fixes: bcaa54c1a1 ("net/enic: support mbuf fast free offload")
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Add information on the ability of the guest app to send
a policy to the host app.
Add information on the branch ratio out-of-band method
of workload monitoring and power management.
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
With mlx5, unlike normal flow rules implemented through Verbs for traffic
emitted and received by the application, those targeting different logical
ports of the device (VF representors for instance) are offloaded at the
switch level and must be configured through Netlink (TC interface).
This patch adds preliminary support to manage such flow rules through the
flow API (rte_flow).
Instead of rewriting tons of Netlink helpers, and as previously suggested
by Stephen [1], this patch introduces a new dependency on libmnl [2]
(LGPL-2.1) when compiling mlx5.
[1] https://mails.dpdk.org/archives/dev/2018-March/092676.html
[2] https://netfilter.org/projects/libmnl/
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Make a few updates in preparation for 18.08.
- Use SPDX
- Add 1400 series VIC adapters to supported models
- Describe the VXLAN port number
- Expand the description for ig-vlan-rewrite
- Add inner RSS and checksum to the features
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Due to the complexity of the NVGRE_ENCAP flow action and the fact that
testpmd does not allocate memory, this patch adds a new command in testpmd
to initialise a global structure containing the necessary information to
build the outer layer of the packet. This same global structure is
then used by the flow command line in testpmd when the
nvgre_encap action is parsed; at that point, the conversion into such an
action becomes trivial.
This global structure is only used for the encap action.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Tested-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Due to the complexity of the VXLAN_ENCAP flow action and the fact that
testpmd does not allocate memory, this patch adds a new command in testpmd
to initialise a global structure containing the necessary information to
build the outer layer of the packet. This same global structure is
then used by the flow command line in testpmd when the
vxlan_encap action is parsed; at that point, the conversion into such an
action becomes trivial.
This global structure is only used for the encap action.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Tested-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Add Octeontx ZIP PMD feature specification and user guide
with build and run instructions.
Signed-off-by: Ashish Gupta <ashish.gupta@caviumnetworks.com>
Signed-off-by: Shally Verma <shally.verma@caviumnetworks.com>
Signed-off-by: Sunila Sahu <sunila.sahu@caviumnetworks.com>
Add zlib PMD feature support and a user guide with
build and run instructions.
Signed-off-by: Sunila Sahu <sunila.sahu@caviumnetworks.com>
Signed-off-by: Shally Verma <shally.verma@caviumnetworks.com>
Signed-off-by: Ashish Gupta <ashish.gupta@caviumnetworks.com>
This patch adds chained mbuf support for input or output buffers
during compression/decompression operations.
Signed-off-by: Lee Daly <lee.daly@intel.com>
Reviewed-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch adds a packet-size-distr mode specific parameter parser
to support a threshold packet size value other than the default of
128 bytes.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
This patch adds Scatter-Gather List (SGL) feature to
QAT compression PMD.
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Added support for the 3DES cipher algorithm with
8, 16 and 24 byte keys, which has also been
added in v0.50 of the IPsec Multi-buffer lib.
Signed-off-by: Marko Kovacevic <marko.kovacevic@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds support for the v0.50 of the IPsec Multi-buffer lib.
The library now exposes its version, with the idea
of maintaining backwards compatibility in the future,
avoiding breaking the compilation of the PMD every time
there is a new version available.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds support for the v0.50 of the IPsec Multi-buffer lib.
The library now exposes its version, with the idea
of maintaining backwards compatibility in the future,
avoiding breaking the compilation of the PMD every time
there is a new version available.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Extend QAT guide to cover crypto and compression and common
information, particularly about kernel driver dependency.
Update release note.
Update compression feature list for qat.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
udp_cksum is duplicated; the second one should be tcp_cksum.
Fixes: c73a907187 ("app/testpmd: add commands to test new offload API")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
This patch adds support for an additional bus type, the Virtual Machine
Bus (VMBUS), on Microsoft Hyper-V in Windows 10, Windows Server 2016
and Azure. Most of this code was extracted from FreeBSD and some of
this is from earlier code donated by Brocade.
Only Linux is supported at present, but the code is split
to allow future FreeBSD and Windows support.
The bus support relies on the uio_hv_generic driver from Linux
kernel 4.16. Multiple queue support requires additional sysfs
interfaces, which are in kernel 5.0 (a.k.a 4.17).
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
This patch adds support for building and running the mlx5 PMD on
32-bit systems such as i686.
The main issue to tackle was handling the 32-bit access to the UAR,
as quoted from the mlx5 PRM:
QP and CQ DoorBells require 64-bit writes. For best performance, it
is recommended to execute the QP/CQ DoorBell as a single 64-bit write
operation. For platforms that do not support 64 bit writes, it is
possible to issue the 64 bits DoorBells through two consecutive
writes,
each write 32 bits, as described below:
* The order of writing each of the Dwords is from lower to upper
addresses.
* No other DoorBell can be rung (or even start ringing) in the midst
of an on-going write of a DoorBell over a given UAR page.
The last rule implies that in a multi-threaded environment, the access
to a UAR page (which can be accessible by all threads in the process)
must be synchronized (for example, using a semaphore) unless an atomic
write of 64 bits in a single bus operation is guaranteed. Such a
synchronization is not required for when ringing DoorBells on different
UAR pages.
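A minimal sketch of the two-write scheme quoted above (illustrative only;
the helper and lock names are not the driver's actual symbols, and the
real code also deals with endianness and per-UAR-page locking):

    #include <stdint.h>
    #include <rte_atomic.h>
    #include <rte_io.h>
    #include <rte_spinlock.h>

    static rte_spinlock_t uar_lock = RTE_SPINLOCK_INITIALIZER;

    /* Sketch: emulate a 64-bit doorbell with two 32-bit writes, lower
     * address first, serialized so that no other doorbell on the same
     * UAR page can start in the middle. */
    static void
    ring_doorbell_32(volatile uint32_t *uar_reg, uint64_t db_val)
    {
        rte_spinlock_lock(&uar_lock);
        rte_write32((uint32_t)db_val, &uar_reg[0]);
        rte_wmb();
        rte_write32((uint32_t)(db_val >> 32), &uar_reg[1]);
        rte_spinlock_unlock(&uar_lock);
    }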
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Prior to this patch, all port representors detected on a given device were
probed and Ethernet devices instantiated for each of them.
This patch adds support for the standard "representor" parameter, which
implies that port representors are not probed by default anymore, except
for the list provided through device arguments.
(Patch based on prior work from Yuanhan Liu)
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Reviewed-by: Xueming Li <xuemingl@mellanox.com>
As per deprecation notice [1], move DPDK runtime config to default
DPDK runtime data location. Also, remove the deprecation notice and
update release notes to indicate the changes.
[1] http://dpdk.org/patch/40418
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
For asynchronous requests, user callback may be triggered either from
IPC thread or from interrupt thread. Because of this, delivery of
other interrupt-based events such as alarms may not be possible inside
the asynchronous IPC request callback handler. Document this
limitation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Previously, it was possible to limit the maximum amount of memory
allowed for allocation by creating validator callbacks. Although a
powerful tool, it is a bit of a hassle and requires modifying the
application, which does not work for stock DPDK example applications.
Fix this by adding a new parameter "--socket-limit", with syntax
similar to "--socket-mem", which would set per-socket memory
allocation limits, and set up a default validator callback to deny
all allocations above the limit.
This option is incompatible with legacy mode, as validator callbacks
are not supported there.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Add the ability for an application to register a callback function
for SW transfers; the callback can decide which packets can
be enqueued to the event device.
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Add support for interrupt-driven queues when the eth device is
configured for Rx queue interrupts and the servicing weight for the
queue is configured to be zero.
An interrupt-driven packet received counter has been added to
rte_event_eth_rx_adapter_stats.
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
This patch updates the programmer guide and testpmd user guide for
UDP/IPv4 GSO.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Set the starting point that all commits on the master branch
with a Fixes tag should be backported to the relevant stable/LTS
branches, and explain that the submitter may indicate that a commit is
not suitable for backport.
Of course there will be exceptions that will crop up from time
to time that need discussion, so also add a sentence for that.
This is to ensure that there is consistency between what is
backported to stable/LTS branches, remove some subjectivity
as to what constitutes "a fix" and avoid possible conflicts
for future backports.
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Aaron Conole <aconole@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
This is the guide for cross-compiling ARM64 DPDK on x86 hosts.
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Enable crypto devs to specify the minimum headroom and tailroom they
expect in the mbuf. For net PMDs, standard headroom has to be honoured
by applications, which is not strictly followed for crypto devs. This
prevents crypto devs from using free space in the mbuf (available as
head/tailroom) for internal requirements in crypto operations. Adding
a head/tailroom requirement will help PMDs communicate such
requirements to the application.
The availability and use of head/tailroom is an optimization if the
hardware supports use of head/tailroom for crypto-op info. Devices
that do not support using the head/tailroom can continue to operate
without any performance drop.
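A hedged sketch of how an application could honour the requirement (the
min_mbuf_headroom_req/min_mbuf_tailroom_req field names are the ones
assumed to be added to rte_cryptodev_info by this change):

    #include <rte_cryptodev.h>
    #include <rte_mbuf.h>

    /* Sketch: verify an mbuf leaves enough head/tailroom for the device. */
    static int
    mbuf_fits_crypto_dev(uint8_t dev_id, const struct rte_mbuf *m)
    {
        struct rte_cryptodev_info info;

        rte_cryptodev_info_get(dev_id, &info);
        return rte_pktmbuf_headroom(m) >= info.min_mbuf_headroom_req &&
               rte_pktmbuf_tailroom(m) >= info.min_mbuf_tailroom_req;
    }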
Signed-off-by: Anoob Joseph <anoob.joseph@caviumnetworks.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The name private_data is confusing in these APIs:
rte_cryptodev_sym_session_set_private_data()
rte_cryptodev_sym_session_get_private_data()
It refers to data added at the end of the session header for
use by the application.
The session already contains sess_private_data[index],
which is used to store private PMD data, and most references to private
data refer to that,
e.g. the external API
rte_cryptodev_sym_get_private_session_size() and the internal
set/get_session_private_data() refer to sess_private_data[].
So rename to user_data, i.e.
rte_cryptodev_sym_session_set_user_data()
rte_cryptodev_sym_session_get_user_data()
Refers to changes introduced here:
https://patches.dpdk.org/patch/38172/
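A small sketch of the renamed calls from the application's perspective
(the app_ctx struct is purely illustrative, and the session must have been
created with room for the user data):

    #include <rte_cryptodev.h>

    struct app_ctx {
        uint32_t flow_id;
    };

    /* Sketch: store and read back application data in the session header. */
    static uint32_t
    tag_session(struct rte_cryptodev_sym_session *sess, uint32_t flow_id)
    {
        struct app_ctx ctx = { .flow_id = flow_id };
        struct app_ctx *stored;

        rte_cryptodev_sym_session_set_user_data(sess, &ctx, sizeof(ctx));
        stored = rte_cryptodev_sym_session_get_user_data(sess);
        return stored->flow_id;
    }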
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
As announced in the previous release,
the API to attach/detach a session to a queue pair
is removed, as it was only used in DPAA and is not
actually needed.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
The current mbuf scatter gather feature flag is
too ambiguous, as it is not clear whether input and/or output
buffers can be scatter gather mbufs or not, nor whether
in-place and/or out-of-place is supported.
Therefore, five new flags will replace this flag:
- RTE_CRYPTODEV_FF_IN_PLACE_SGL
- RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT
- RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT
- RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT
- RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT
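For example, an application could gate its use of chained mbufs on one of
these flags (a sketch using the semantics listed above):

    #include <rte_cryptodev.h>

    /* Sketch: only use a chained (SGL) source with a linear destination
     * out-of-place if the device advertises that combination. */
    static int
    oop_sgl_in_lb_out_ok(uint8_t dev_id)
    {
        struct rte_cryptodev_info info;

        rte_cryptodev_info_get(dev_id, &info);
        return (info.feature_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT) != 0;
    }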
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Removed the rte_cryptodev_get_header_session_size
and rte_cryptodev_get_private_session_size functions,
as they have been substituted with functions
specific to symmetric operations, with the word "sym"
after "rte_cryptodev_".
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Removed cryptodev queue start/stop functions,
as they were marked deprecated in 18.05, since they
were not implemented by any driver.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
In release 18.05, a deprecation notice to remove the `sym`
structure in the cryptodev info structure was sent.
However, only one of the fields inside the structure will
be removed, so the notice is not actually correct.
In any case, the notice needs to be removed.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Most crypto PMDs do not have a limitation
on the number of sessions that can be handled
internally. The value that was set before was not
actually used at all, since the sessions are created
at the application level.
Therefore, this value is not parsed from the initial
crypto parameters anymore and it is set to 0,
meaning that there is no actual limit.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Currently, the info structure contains the maximum number
of sessions that a device can manage.
This field was useful when the session mempool was created inside
each device, but now it is created at the application level.
Most PMDs do not have a limitation on the sessions managed,
but a few do, therefore this field must remain in the structure.
However, a new value, 0, can be used to indicate that
a device does not have an actual maximum of sessions.
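A sketch of how an application might size its session pool under the new
semantics (APP_MAX_SESSIONS is an illustrative application-defined
ceiling):

    #include <rte_cryptodev.h>

    #define APP_MAX_SESSIONS 2048

    /* Sketch: treat max_nb_sessions == 0 as "no device-imposed limit". */
    static unsigned int
    session_pool_size(uint8_t dev_id)
    {
        struct rte_cryptodev_info info;

        rte_cryptodev_info_get(dev_id, &info);
        if (info.sym.max_nb_sessions != 0 &&
            info.sym.max_nb_sessions < APP_MAX_SESSIONS)
            return info.sym.max_nb_sessions;
        return APP_MAX_SESSIONS;
    }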
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Structure rte_cryptodev_info has currently PCI device
information ("struct rte_pci_device") in it.
This information is not generic to all devices,
so this gets replaced with the generic "rte_device" structure,
compatible with all crypto devices.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
The current mbuf scatter gather feature flag is
too ambiguous, as it is not clear if input and/or output
buffers can be scatter gather mbufs or not.
Therefore, three new flags will replace this flag:
- RTE_COMP_FF_OOP_SGL_IN_SGL_OUT
- RTE_COMP_FF_OOP_SGL_IN_FB_OUT
- RTE_COMP_FF_OOP_LB_IN_SGL_OUT
Note that out-of-place flat buffers are supported by default
and in-place is not supported by the library.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
Renamed feature "Bypass" to "Pass-through",
as it is a more explicit name, meaning that the PMD
is capable of passing the mbufs through it
without making any modifications (i.e. the NULL algorithm).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
In PMD feature matrices (.ini files), it is not required to
have the list of features that are not supported,
just the ones that are.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Lee Daly <lee.daly@intel.com>
Fixes: 3d12dceed2df ("ethdev: add new offload flag to keep CRC")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
In DPDK 17.11, the ethdev offloads API has changed:
commit cba7f53b71 ("ethdev: introduce Tx queue offloads API")
commit ce17eddefc ("ethdev: introduce Rx queue offloads API")
The new API is documented in the programmer's guide:
http://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html#hardware-offload
As a reminder, the main concepts in the new API were:
- All offloads are disabled by default
- Distinction between per port and per queue offloads.
The transition bits are now removed:
- Translation of the old API in ethdev
- rte_eth_conf.rxmode.ignore_offload_bitfield
- ETH_TXQ_FLAGS_IGNORE
The old API bits are now removed:
- Rx per-port rte_eth_conf.rxmode.[bit-fields]
- Tx per-queue rte_eth_txconf.txq_flags
- ETH_TXQ_FLAGS_NO*
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Shahaf Shuler <shahafs@mellanox.com>
Some test applications and examples were not converted
to the new offload API introduced in 17.11.
For reference, see "Hardware Offload" in
doc/guides/prog_guide/poll_mode_drv.rst
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The example code is showing how to use KNI, and can be found in
examples/kni/
The documentation guide for this example is explaining the code
to ease the understanding of the example.
Inside this documentation, there is a lot of example code
which is copy/pasted. It is really too much and hard to maintain.
The code inside this documentation is replaced by the names
of the functions.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Support rx of in-direction packets only.
This is useful for apps that also tx to eth_pcap ports, in order not to see
those packets echoed back in as rx when the out direction is also captured.
Example:
When using rx_iface and sending a *single* packet to eth1,
it will loop forever: when the packet is sent to tx_iface=eth1,
it is captured again on rx_iface=eth1, and so on.
$RTE_TARGET/app/testpmd -l 0-3 -n 4 \
--vdev 'net_pcap0,rx_iface=eth1,tx_iface=eth1'
…
---------------------- Forward statistics for port 0 ------------
RX-packets: 758 RX-dropped: 0 RX-total: 758
TX-packets: 758 TX-dropped: 0 TX-total: 758
------------------------------------------------------------------
While when using rx_iface_in, the packet is not captured on the way out
and is forwarded only once:
$RTE_TARGET/app/testpmd -l 0-3 -n 4 \
--vdev 'net_pcap0,rx_iface_in=eth1,tx_iface=eth1'
…
---------------------- Forward statistics for port 0 ------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 1 TX-dropped: 0 TX-total: 1
------------------------------------------------------------------
Signed-off-by: Ido Goshen <ido@cgstowernetworks.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add a new devarg "ig-vlan-rewrite" to allow the user to set a
non-default rewrite mode. The UCS VIC may add/remove/modify the VLAN
header of an ingress packet depending on the ingress VLAN rewrite
mode.
By default, the driver sets the pass-through mode, which tells the NIC
"do not touch VLAN header and preserve it as is". This mode is usually
sufficient, but can complicate deployments for certain environments.
For example, OVS-DPDK in UCS blade environments may want to use "untag
default VLAN mode", which removes the VLAN header from an ingress
packet if it matches vNIC's default VLAN.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
rte_eth_rx_descriptor_status and rte_eth_tx_descriptor_status
are supported by fm10k.
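For reference, a sketch of how an application can use the Rx variant
(standard ethdev API only):

    #include <rte_ethdev.h>

    /* Sketch: check whether the Rx descriptor 'offset' entries ahead of
     * the next one to be read already holds a received packet. */
    static int
    rx_desc_done(uint16_t port_id, uint16_t queue_id, uint16_t offset)
    {
        return rte_eth_rx_descriptor_status(port_id, queue_id, offset) ==
               RTE_ETH_RX_DESC_DONE;
    }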
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
For i40evf, the internal Rx interrupt and the adminq interrupt share the
same source, which causes a lot of CPU cycles to be wasted in the interrupt
handler on the Rx path. This is a complaint from customers which require
low latency (setting I40E_ITR_INTERVAL to a small value), but then have to
suffer tremendous interrupt handling that eats significant CPU resources.
The patch disables the PCI interrupt and removes the interrupt handler,
replacing it with a low-frequency (50 ms) interrupt polling daemon
implemented by periodically registering an alarm callback. This saves CPU
time significantly: on a typical x86 server with a 2.1 GHz CPU, with a low
latency configuration (32 us) we saw CPU usage from the top command
reduced from 20% to 0% on the management core in testpmd.
Also, with the new method we can remove the compile option
I40E_ITR_INTERVAL, which was previously used to balance between low latency
and low CPU usage. Now we don't need it since we can reach both at the
same time.
Suggested-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
After the IN_ORDER Rx/Tx paths were added, the Rx/Tx path selection
logic needs to be updated.
Rx path selection logic: if IN_ORDER and mergeable are enabled, select the
IN_ORDER Rx path. If IN_ORDER is enabled and Rx offloads and mergeable are
disabled, select the simple Rx path. Otherwise select the normal Rx
path.
Tx path selection logic: if IN_ORDER is enabled, select the IN_ORDER Tx
path. Otherwise select the default Tx path.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add parameters for configuring VIRTIO_NET_F_MRG_RXBUF and
VIRTIO_F_IN_ORDER feature bits. If a feature is disabled, also update
the corresponding unsupported feature bit.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
DEV_RX_OFFLOAD_KEEP_CRC offload flag is added. PMDs that support
keeping CRC should advertise this offload capability.
The DEV_RX_OFFLOAD_CRC_STRIP flag will remain for one more release; the
default behavior in PMDs is to keep the CRC until this flag is removed.
Until the DEV_RX_OFFLOAD_CRC_STRIP flag is removed:
- Setting both KEEP_CRC & CRC_STRIP is INVALID
- Setting only CRC_STRIP PMD should strip the CRC
- Setting only KEEP_CRC PMD should keep the CRC
- Not setting both PMD should keep the CRC
A helper function rte_eth_dev_is_keep_crc() has been added to be able to
change the no-flag behavior with minimal changes in PMDs.
PMDs that do not report the DEV_RX_OFFLOAD_KEEP_CRC offload can
remove the rte_eth_dev_is_keep_crc() checks in the next release; the
related code is commented to help with that maintenance task.
Also, DEV_RX_OFFLOAD_CRC_STRIP has been added to virtual drivers since
they don't use the CRC at all; when an application requires this offload,
virtual PMDs should not return an error.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Allain Legacy <allain.legacy@windriver.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The optimal values of several transmission and reception related
parameters, such as burst sizes, descriptor ring sizes, and number
of queues, vary between different network interface devices. This
patch adds the values for the ixgbe PMD.
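A hedged sketch of how an application might consume these values (the
default_rxportconf field name is assumed from the earlier dev_info
additions; the fallback values are arbitrary):

    #include <rte_ethdev.h>

    /* Sketch: prefer the PMD's recommended ring and burst sizes when the
     * application has no requirement of its own. */
    static void
    pick_rx_params(uint16_t port_id, uint16_t *nb_rxd, uint16_t *burst)
    {
        struct rte_eth_dev_info info;

        rte_eth_dev_info_get(port_id, &info);
        *nb_rxd = info.default_rxportconf.ring_size ?
                  info.default_rxportconf.ring_size : 512;
        *burst = info.default_rxportconf.burst_size ?
                 info.default_rxportconf.burst_size : 32;
    }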
Signed-off-by: Remy Horton <remy.horton@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Multi-Packet Receive Queue (MPRQ) receives multiple packets on a single
large buffer. The number of consumed strides in the CQE is accumulated to
keep track of the current stride index. However, it is safer to directly
use the stride index in the CQE to avoid an out-of-order situation, which
could possibly be caused by introducing LRO in the future.
If Rx CQE compression is enabled, HW can be configured to store the stride
index in a mini-CQE, but this needs a newer version of the library/driver.
Therefore, since this change, MPRQ is only supported with the newer
library/driver, and the Rx hash result is not supported if MPRQ is enabled
along with Rx CQE compression.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
This feature is used to create a hole between the headroom
and the actual data. The size of the hole is specified in bytes as a
module parameter to the PMD.
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Document the driver and device naming formats.
Changed the underscores alignment.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Add a small amount of additional code, use consistent variable names
across code blocks, and change the image to represent queues and
CPU cores intuitively. These changes help improve the eventdev library
documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Make the compiler switch name and document name consistent as ``ifc`` to
avoid confusion. Also rename the map file to standard name for meson
build in the process.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Introduce rte_flow skeleton and implement validate operation.
Parse and convert <item>, <action>, <attributes> into hardware
specification. Perform validation, including basic sanity tests
and underlying device's supported filter capability checks.
Currently add support for:
<item>: IPv4, IPv6, TCP, and UDP.
<action>: Drop, Queue, and Count.
Also add sanity checks to ensure filters are created at specified
index in LE-TCAM region. The index in LE-TCAM region indicates
the filter rule's priority with index 0 having the highest priority.
If no index is specified, filters are created at closest available
free index.
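For illustration, a minimal generic rte_flow sketch of the kind of rule
the validate operation handles (application-side and not cxgbe-specific;
pattern and action contents are arbitrary):

    #include <rte_flow.h>

    /* Sketch: validate an "IPv4 + UDP -> queue 0" ingress rule. */
    static int
    validate_ipv4_udp_to_queue(uint16_t port_id)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_UDP },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;

        return rte_flow_validate(port_id, &attr, pattern, actions, &error);
    }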
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add template release notes for DPDK 18.08 with inline
comments and explanations of the various sections.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: John McNamara <john.mcnamara@intel.com>
Fix grammar, spelling and formatting of DPDK 18.05 release notes.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Also, reference Bugzilla entry for keeping most current information in
one place.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Added known issue of rte_abort taking a long time
on FreeBSD due to recent memory subsystem rework.
Also, reference Bugzilla entry for keeping most
current information in one place.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Add a contribution guideline on adding tags
when sending patches for issues that have been raised on
Bugzilla.
Signed-off-by: Marko Kovacevic <marko.kovacevic@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Device querying and declaration has been postponed to 18.08.
Additionally, while working on the feature, some changes previously
announced won't be enacted.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
This expands the ISA-L documentation to give a more detailed explanation
of the mapping between the compressdev API levels and the PMD levels.
Signed-off-by: Lee Daly <lee.daly@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Deprecate rte_eal_mbuf_default_mempool_ops(), it shall be replaced by
rte_mbuf_best_mempool_ops().
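A small sketch of the suggested replacement when creating an mbuf pool
(pool sizing values are arbitrary):

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mbuf_pool_ops.h>

    /* Sketch: let the library pick the best mempool ops for mbuf pools. */
    static struct rte_mempool *
    make_mbuf_pool(void)
    {
        return rte_pktmbuf_pool_create_by_ops("mbuf_pool", 8192, 256, 0,
                RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
                rte_mbuf_best_mempool_ops());
    }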
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Most of this work was already done, but the runtime config path is
considered to be part of the public API because of the high likelihood of
it being used by various tools in the DPDK ecosystem, and thus it
requires a deprecation notice.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>