This is an ABI deprecation notice for DPDK 16.11 in librte_ether about
changes in rte_eth_dev and rte_eth_desc_lim structures.
As discussed in that thread:
http://dpdk.org/ml/archives/dev/2015-September/023603.html
Depending on the HW offloads requested, different NIC models might impose
different requirements on the packets to be transmitted, in terms of:
- max number of fragments allowed per packet
- max number of fragments per TSO segment
- the way the pseudo-header checksum should be pre-calculated
- L3/L4 header field filling
- etc.
MOTIVATION:
-----------
1) Some work cannot (and should not) be done in rte_eth_tx_burst.
However, this work is sometimes required, and today it is left as an
application issue.
2) Different hardware may have different requirements for TX offloads;
a different subset may be supported, and so on.
3) Some parameters (e.g. the number of segments in the ixgbe driver) may
hang the device. These parameters may vary between devices.
For example, i40e HW allows 8 fragments per packet, but that is after
TSO segmentation, while ixgbe has a 38-fragment pre-TSO limit.
4) Fields in the packet may require different initialization (e.g. a
pseudo-header checksum pre-calculation, sometimes done in a
different way depending on the packet type, and so on). Today the
application needs to take care of this.
5) Using an additional API (rte_eth_tx_prep) before rte_eth_tx_burst lets
the application prepare the packet burst in a form acceptable to the
specific device.
6) Some additional checks may be done in debug mode, keeping the tx_burst
implementation clean.
PROPOSAL:
---------
To help the user deal with all these varieties we propose to:
1. Introduce an rte_eth_tx_prep() function to do the necessary preparation
of a packet burst so it can be safely transmitted on the device with the
desired HW offloads (set/reset checksum fields according to the hardware
requirements) and to check HW constraints (number of segments per
packet, etc).
Since the limitations and requirements may differ between devices, this
requires extending the rte_eth_dev structure with a new function pointer,
"tx_pkt_prep", which can be implemented in the driver to prepare and
verify packets, in a device-specific way, before the burst, and which
should prevent the application from sending malformed packets.
2. Introduce new fields in rte_eth_desc_lim:
nb_seg_max and nb_mtu_seg_max, providing information about the maximum
number of segments in TSO and non-TSO packets acceptable by the device.
This information helps the application avoid creating malformed packets.
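For illustration, a minimal sketch (not part of the proposal text) of how an
application could read these limits, assuming they are exposed through the
existing tx_desc_lim field of rte_eth_dev_info:

	struct rte_eth_dev_info dev_info;
	uint16_t seg_max, mtu_seg_max;

	rte_eth_dev_info_get(port_id, &dev_info);
	/* proposed fields, names as in this notice */
	seg_max = dev_info.tx_desc_lim.nb_seg_max;         /* max segs per TSO packet */
	mtu_seg_max = dev_info.tx_desc_lim.nb_mtu_seg_max;  /* max segs per non-TSO packet */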
APPLICATION (USE CASE):
--------------------------
1) The application initializes the burst of packets to send, sets the
required TX offload flags and the required fields, like l2_len, l3_len,
l4_len and tso_segsz.
2) The application passes the burst to rte_eth_tx_prep to check the
conditions required to send the packets through the NIC.
3) The result of rte_eth_tx_prep can be used to send the valid packets
and/or to restore the invalid ones if the function fails.
e.g.:

	for (i = 0; i < nb_pkts; i++) {
		/* initialize or process packet */
		bufs[i]->tso_segsz = 800;
		bufs[i]->ol_flags = PKT_TX_TCP_SEG | PKT_TX_IPV4
				| PKT_TX_IP_CKSUM;
		bufs[i]->l2_len = sizeof(struct ether_hdr);
		bufs[i]->l3_len = sizeof(struct ipv4_hdr);
		bufs[i]->l4_len = sizeof(struct tcp_hdr);
	}

	/* Prepare burst of TX packets */
	nb_prep = rte_eth_tx_prep(port, 0, bufs, nb_pkts);

	if (nb_prep < nb_pkts) {
		printf("tx_prep failed\n");
		/* drop or restore invalid packets */
	}

	/* Send burst of TX packets */
	nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_prep);

	/* Free any unsent packets */
	for (i = nb_tx; i < nb_prep; i++)
		rte_pktmbuf_free(bufs[i]);
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The right name of ethdev should be dpdk_netdev. However:
1/ We are using the rte_ prefix in the code and library names.
2/ The API uses rte_ethdev.
That's why 16.11 will just have the rte_ prefix prepended to
the library filename, as for every other library.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Driver names for all the supported devices in DPDK do not have
a naming convention. Some are using a prefix, some are not
and some have long names. Driver names are used when creating
virtual devices, so it is useful to have consistency in the names.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Remove deprecation notice pertaining to introduction of new flow
types in favor of a more generic filtering infrastructure proposal.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Add a new section on tested platforms, NICs and OSes to the release notes.
Signed-off-by: Yulong Pei <yulong.pei@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Improve the wording of some text in the "new features" section of
the release notes.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: John McNamara <john.mcnamara@intel.com>
When the i40e Linux kernel driver is used as the host driver and DPDK handles
the i40e VF, promiscuous mode does not work on the VF. It is not supported by
the DPDK i40e VF driver right now.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
The following tools may be installed system-wide.
It may be cleaner and more convenient to find them with the same
dpdk- prefix (especially for autocompletion).
Moreover, the script dpdk_nic_bind.py deserves a new name because it is
not restricted to NICs and can be used for e.g. crypto.
These files are renamed:
pmdinfogen -> dpdk-pmdinfogen
pmdinfo.py -> dpdk-pmdinfo.py
dpdk_pdump -> dpdk-pdump
dpdk_proc_info -> dpdk-procinfo
dpdk_nic_bind.py -> dpdk-devbind.py
setup.sh -> dpdk-setup.sh
The tools pmdinfogen, pmdinfo.py and dpdk_pdump are new in 16.07.
The scripts dpdk_nic_bind.py and setup.sh may have been used with
previous releases by end users. That's why a symbolic link still
provides the old name in the installed tools directory.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Update the Sphinx installation instructions in the documentation
contributors guide to reflect the fact that in the 1.4+ versions
of Sphinx the ReadTheDocs theme must also be installed. Previously,
in version 1.3.x, it was installed by default.
Also change 'yum' to 'dnf' for package installations.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Fix warnings raised by Python Sphinx 1.4.5:
guides/sample_app_ug/ip_pipeline.rst:334:
WARNING: Could not lex literal_block as "ini". Highlighting skipped.
guides/sample_app_ug/l2_forward_real_virtual.rst:467:
WARNING: Could not lex literal_block as "c". Highlighting skipped.
guides/sample_app_ug/l3_forward.rst:293:
WARNING: Could not lex literal_block as "c". Highlighting skipped.
guides/sample_app_ug/vm_power_management.rst:162:
WARNING: Could not lex literal_block as "xml". Highlighting skipped.
These warnings arise from invalid syntax in code-block directives.
Fixes: f1e779ec5b ("doc: update ip pipeline app guide")
Fixes: d0dff9ba44 ("doc: sample application user guide")
Fixes: c75f4e6a7a ("doc: add vm power mgmt app")
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Flow Bifurcation is a mechanism which uses features of advanced
Ethernet devices to split traffic between queues. It provides
the capability to let the kernel driver and DPDK driver co-exist
and take advantage of both.
It is achieved by using SR-IOV and the NIC's advanced filtering. This
patch describes Flow Bifurcation and adds the user guide for ixgbe
and i40e NICs.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch adds an image of the test configuration for Live Migration
of a VM using vhost_user on the host.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch describes the procedure to be followed to perform
Live Migration of a VM with Virtio PMD running on a host which
is running the vhost_user sample application (vhost-switch).
It includes sample host and VM scripts used in the procedure.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch adds an image of the Live Migration test configuration
for virtio and SR-IOV.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch describes the procedure to be followed
to perform Live Migration of a VM with Virtio and VF PMDs
using the bonding PMD.
It includes sample host and VM scripts used in the procedure,
and a sample switch configuration.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
- Fix vhost setup flags
- Add minor edits to improve readability and consistency
Signed-off-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
The vhost feature negotiation only happens at the virtio reset stage, i.e.
when a virtio-net device is first initialized, or when the DPDK virtio PMD
initializes. That means, if the vhost app restarts after the negotiation and
reconnects, the feature negotiation process will not be triggered again,
meaning the info is lost. To make reconnect work, QEMU simply saves
the negotiated features before the restart and restores them afterwards.
Therefore, the vhost supported features must be exactly the same before
and after the restart. For example, if TSO is disabled and then enabled,
nothing will work and undefined issues might happen.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
The commit cb6696d220 ("drivers: update registration macro usage")
changes the name from virtio-user to virtio_user, because hyphen
cannot be used in a C symbol name. However, this commit does not
update the strings in docs and source code, which could lead to
failure to start this device as per the docs.
This patch updates related strings in the docs and source code.
Fixes: cb6696d220 ("drivers: update registration macro usage")
Reported-by: Tiwei Bie <tiwei.bie@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The correct mailing list is dev@dpdk.org, not dev@dpkg.org.
Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Added a missing note about the dependencies the pdump tool has on libpcap
and the CONFIG_RTE_LIBRTE_PMD_PCAP flag.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Fixed the default socket path name from "/var/run" to "/var/run/.dpdk" and
from "$HOME" to "~/.dpdk".
Fixes: 278f945402 ("pdump: add new library for packet capture")
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Since users of the pdump library and tool can choose to have their own
server and client paths, it is a must for the pdump tool to use the same
server socket path that was used by the primary application while
initializing the packet capture framework with the rte_pdump_init() or
rte_pdump_set_socket_dir() APIs.
To pass the socket path info to the pdump tool, new optional command-line
options "server-socket-path" and "client-socket-path" are added.
"client-socket-path" is added in case users want to have client
sockets in their own defined paths.
Updated pdump tool guide with the new changes.
Fixes: caa7028276 ("app/pdump: add tool for packet capturing")
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This file is going to disappear, remove the doxygen parts that reference
various drivers and remove it from the doxygen index.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Update l3fwd example usage and documentation with missing options.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch explains the current virtio PMD Rx/Tx callbacks, to help
understand the differences and how to enable the right ones.
Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch adds support for extended statistics for BNX2X PMD.
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch adds support for extended statistics for QEDE PMD.
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Update the 'High Performance of Small Packets on 40G NIC' section of the
Getting Started Guide (GSG) as the firmware version referenced for a NIC
using the i40e driver was version 4.2.5 which is no longer validated.
Instruct users to consult release notes for current validated firmware
versions.
Signed-off-by: Ian Stokes <ian.stokes@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
mmap() of the iomem range of the PCI device fails for kernels that
enabled the CONFIG_IO_STRICT_DEVMEM option:
EAL: pci_map_resource():
cannot mmap(39, 0x7f1c51800000, 0x100000, 0x0):
Invalid argument (0xffffffffffffffff)
CONFIG_IO_STRICT_DEVMEM was introduced in Linux v4.5 and is not enabled
by default:
Linux commit: 90a545e restrict /dev/mem to idle io memory ranges
As a workaround, igb_uio can stop reserving PCI memory resources; from
the kernel's point of view the iomem region then looks idle and mmap works
again. This matches uio_pci_generic usage.
With this update the device iomem range is not protected against any
other kernel drivers or userspace access. But this shouldn't
be a problem for DPDK usage, since the purpose of the igb_uio
module is to provide userspace access.
Fixes: af75078fec ("first public release")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
KASUMI PMD only supports bit-level cipher operations
when destination buffer is different from the source
(out of place operations). This commit adds a check
in the code to prevent the user from trying to perform
in-place bit-level ciphering.
Fixes: 2773c86d06 ("crypto/kasumi: add driver for KASUMI library")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Sphinx reports this error:
doc/guides/prog_guide/dev_kit_build_system.rst:337: WARNING:
Pygments lexer name u'C' is not known
Fixes: 737ddf3fb ("doc: add prog guide section documenting pmdinfo script")
Reported-by: Bernard Iremonger <bernard.iremonger@intel.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
The recipe rte.hostapp.mk does not build in hostapp/ anymore.
Fixes: 98b0fdb0ff ("pmdinfogen: add buildtools and pmdinfogen utility")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Information on pmdinfogen may be useful to 3rd party driver developers.
Include documentation on what it does.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Mainly updates the vhost-user part: we now support client mode.
Also refine some wording, and add a bit more explanation.
Also make an emphatic statement that you are advised to use vhost-user
instead of vhost-cuse, because vhost-user has been enhanced a lot since
v2.2 (actually, I doubt there are any people still using vhost-cuse).
[John McNamara: rewords, better formats]
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
For all drivers that currently implement xstats, the id field in the
rte_eth_stats_name structure equals the entry's array index. This
patch eliminates the redundant id field as a direct index lookup is
faster than a search for the matching id field.
Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Remy Horton <remy.horton@intel.com>
The mempool_count and mempool_free_count behaved contrary to what their
names suggested. The free_count function actually returned the number of
elements that were allocated from the pool, not the number unallocated as
the name implied.
Fix this by introducing two new functions to replace the old ones,
* rte_mempool_avail_count to replace rte_mempool_count
* rte_mempool_in_use_count to replace rte_mempool_free_count
In this patch, the new functions are added, and the old ones are marked
as deprecated. All apps and examples that use the old functions are
updated to use the new functions.
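For reference, a small usage sketch of the new functions (mp is an existing
mempool, error handling omitted):

	unsigned int n_avail, n_in_use;

	n_avail = rte_mempool_avail_count(mp);    /* objects still in the pool */
	n_in_use = rte_mempool_in_use_count(mp);  /* objects allocated from the pool */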
Fixes: af75078fec ("first public release")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The mempool cache is only available to EAL threads as a per-lcore
resource. Change this so that the user can create and provide their own
cache on mempool get and put operations. This works with non-EAL threads
too. This commit introduces the new API calls:
rte_mempool_cache_create(size, socket_id)
rte_mempool_cache_free(cache)
rte_mempool_cache_flush(cache, mp)
rte_mempool_default_cache(mp, lcore_id)
Changes the API calls:
rte_mempool_generic_put(mp, obj_table, n, cache, flags)
rte_mempool_generic_get(mp, obj_table, n, cache, flags)
The cache-oblivious API calls use the per-lcore default local cache.
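A minimal usage sketch based on the calls listed above (sizes and the flags
argument are placeholders, error handling omitted):

	struct rte_mempool_cache *cache;
	void *objs[32];

	cache = rte_mempool_cache_create(32, SOCKET_ID_ANY);

	/* get/put through the user-owned cache, works from non-EAL threads too */
	rte_mempool_generic_get(mp, objs, 32, cache, 0);
	rte_mempool_generic_put(mp, objs, 32, cache, 0);

	rte_mempool_cache_flush(cache, mp);
	rte_mempool_cache_free(cache);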
Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
This commit introduces the API calls:
rte_mempool_generic_put(mp, obj_table, n, is_mp)
rte_mempool_generic_get(mp, obj_table, n, is_mc)
Deprecates the API calls:
rte_mempool_mp_put_bulk(mp, obj_table, n)
rte_mempool_sp_put_bulk(mp, obj_table, n)
rte_mempool_mp_put(mp, obj)
rte_mempool_sp_put(mp, obj)
rte_mempool_mc_get_bulk(mp, obj_table, n)
rte_mempool_sc_get_bulk(mp, obj_table, n)
rte_mempool_mc_get(mp, obj_p)
rte_mempool_sc_get(mp, obj_p)
We also check cookies in one place now.
Signed-off-by: Lazaros Koromilas <l@nofutznetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The standard Virtual Ethernet Bridge (VEB) definition in 1Qbg is a bridge
which has an uplink port to the outside world (maybe another bridge), but
a "floating" VEB is a special VEB without an uplink port to the outside.
Instead, traffic can be sent from one VF to another using the floating
VEB - even when the physical link on the NIC port is down.
This patch adds floating VEB options in the devargs for i40e driver.
Using these parameters, applications can decide whether to use legacy
VEB/VEPA or a floating VEB.
To enable this feature, the user should pass a devargs parameter to the
EAL, for example "-w 84:00.0,enable_floating_veb=1", to control whether
the PMD will use the floating VEB feature or not.
Once the floating VEB feature is enabled, all the VFs created by
this PF device are connected to the floating VEB.
NOTE: The floating VEB functionality requires a NIC firmware version
of 5.0 or greater.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Provide an update MTU callback. The function returns -ENOTSUP
if Rx scatter is enabled. Updating the MTU to be greater than
the value configured via the Cisco CIMC/UCSM management interface
is allowed provided it is still less than the maximum egress packet
size allowed by the NIC minus the size of the L2 header.
Signed-off-by: John Daley <johndale@cisco.com>
The ixgbe base driver was updated to version
cid-10g-shared-code.2016.04.12
The changes include:
Added sgmii link for X550.
Added mac link setup for X550a SFP and SFP+.
Added KR support for X550em_a.
Added new phy definitions for M88E1500.
Added support for the VLVF to be bypassed when adding/removing
a VFTA entry.
Added X550a flow control auto negotiation support.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This feature enables the TX burst function to emit up to 5 packets using
only two work queue entries (WQEs) on devices that support it. Saves PCI
bandwidth and improves performance.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
Implement send inline feature which copies packet data directly into
work queue entries (WQEs) for improved latency. The maximum packet
size and the minimum number of Tx queues to qualify for inline send
are user-configurable.
This feature is effective when HW causes a performance bottleneck.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Mini (compressed) completion queue entries (CQEs) are returned by the
NIC when PCI back pressure is detected, in which case the first CQE64
contains common packet information followed by a number of CQE8
providing the rest, followed by a matching number of empty CQE64
entries to be used by software for decompression.
Before decompression:
      0          1          2             6         7         8
  +-------+  +---------+ +-------+     +-------+ +-------+ +-------+
  | CQE64 |  |  CQE64  | | CQE64 |     | CQE64 | | CQE64 | | CQE64 |
  |-------|  |---------| |-------|     |-------| |-------| |-------|
  | ..... |  | cqe8[0] | |       |  .  |       | |       | | ..... |
  | ..... |  | cqe8[1] | |       |  .  |       | |       | | ..... |
  | ..... |  | ....... | |       |  .  |       | |       | | ..... |
  | ..... |  | cqe8[7] | |       |     |       | |       | | ..... |
  +-------+  +---------+ +-------+     +-------+ +-------+ +-------+
After decompression:
      0          1      ...      8
  +-------+  +-------+       +-------+
  | CQE64 |  | CQE64 |       | CQE64 |
  |-------|  |-------|       |-------|
  | ..... |  | ..... |   .   | ..... |
  | ..... |  | ..... |   .   | ..... |
  | ..... |  | ..... |   .   | ..... |
  | ..... |  | ..... |       | ..... |
  +-------+  +-------+       +-------+
This patch does not perform the entire decompression step, as it would be
really expensive; instead, the first CQE64 is consumed and an internal
context is maintained to interpret the following CQE8 entries directly.
Intermediate empty CQE64 entries are handed back to HW without further
processing.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
The latest version of Mellanox OFED exposes hardware definitions necessary
to implement data path operation bypassing Verbs. Update the minimum
version requirement to MLNX_OFED >= 3.3 and clean up compatibility checks
for previous releases.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Inline TX will be fully managed by the PMD after Verbs is bypassed in the
data path. Remove the current code until then.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
There is no scatter/gather support anymore, CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N
has no purpose and can be removed.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Remove the dependency on the nfp_uio kernel module. The igb_uio
kernel module can be used instead.
Fixes: 80bc1752f1 ("nfp: add guide")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
In current i40e codebase, if single VLAN header is added in a packet,
it's treated as inner VLAN. Generally, a single VLAN header is
treated as the outer VLAN header, so update the driver behaviour
appropriately.
Fixes: 19b16e2f64 ("ethdev: add vlan type when setting ether type")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
For performance reasons, this patch uses 2 VIC RQs per RQ presented to
DPDK.
The VIC requires that each descriptor be marked as either a start of
packet (SOP) descriptor or a non-SOP descriptor. A one RQ solution
requires skipping descriptors when receiving small packets and results
in bad performance when receiving many small packets.
The 2 RQ solution makes use of the VIC feature that allows a receive
on primary queue to 'spill over' into another queue if the receive is
too large to fit in the buffer assigned to the descriptor on the
primary queue. This means that there is no skipping of descriptors
when receiving small packets and results in much better performance.
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
- Add device id to the PCI table
- Add polling for the slowpath events for CMT mode device
- Add prerequisites to allow 100g mode
* Min number of queues needed is 2
* Only even number of queues are allowed
- Update documentation
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Add support for setting hash configuration based on adapter capability
and update corresponding NIC documentation.
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
Intel stopped supporting Match Interface, remove reference to it in the
documentation.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Updated doc/guides/nics/overview.rst, doc/guides/nics/thunderx.rst
and release notes
Changed "*" to "P" in overview.rst to capture the partially supported
feature as "*" creating alignment issues with Sphinx table
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch adds the initial skeleton for bnxt driver along with the
nic guide, and ties the driver into the build system.
At this point, the driver simply fails init.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
[Release Note Addition]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
When using a kernel PF and a DPDK VF, when the PF driver finds that the link
state changes, up -> down or down -> up, the driver will send a
message to the VF by mailbox. This link state change may be
triggered by PHY disconnection/reconnection, a user config change
like *ifconfig down/up*, or an interface parameter change, like an MTU change.
This patch enables support of the mailbox interrupt,
so the VF driver can receive the message for link up/down.
After the VF receives this message, the VF port needs to be reset to
recover. This needs to be handled by the application, so this patch
allows the app to register a reset callback so it can reset the VF port.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
When using a kernel PF and a DPDK VF, when the PF driver finds that the link
state changes, up -> down or down -> up, the driver will send a
message to the VF by mailbox. This link state change may be
triggered by PHY disconnection/reconnection, a user config change
like *ifconfig down/up*, or an interface parameter change, like an MTU change.
This patch enables support of the mailbox interrupt,
so the VF driver can receive the message for link up/down.
After the VF receives this message, the VF port needs to be reset to
recover. This needs to be handled by the application, so this patch
allows the app to register a reset callback so it can reset the VF port.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Update the documentation and comments with brief details on the base
code version included in this release.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This patch enables configuring the MTU for i40e.
Since changing the MTU requires reconfiguring the queues, the port must be
stopped before configuring the MTU.
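For example, a sketch of the expected call sequence in an application (the
MTU value is just a placeholder, error handling omitted):

	rte_eth_dev_stop(port_id);
	rte_eth_dev_set_mtu(port_id, 9000);
	rte_eth_dev_start(port_id);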
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Previously, there was a known issue "On Intel® 40G Ethernet
Controller stopping the port does not really down the port link."
There were two reasons why the port was always kept up.
1. Old firmware versions had issues when "Set PHY config command"
was used on 40G NICs.
2. The kernel i40e driver didn't call "Set PHY config command" when
ifconfig up/down was used; it assumes the link is always up. But
in DPDK, ports are forced down when an application quits. So if
the port is then switched to being controlled by the kernel driver,
the port can not be brought up through "ifconfig <ethx> up".
This patch fixes this issue by adding in "Set PHY config command"
into our driver. This is now possible because with newer firmware
there is no longer a problem using this command.
With this fix, after DPDK quits, if the port is switched to being used
by the kernel driver, "ethtool -s <ethx> autoneg on" can be used to
turn on auto-negotiation, and then the port can be brought up through
"ifconfig <ethx> up".
NOTE: requires kernel i40e driver version >= 1.4.X
Fixes: 2f1e228174 ("i40e: skip link control as firmware workaround")
Fixes: 16c979f9ad ("i40e: disable setting of PHY configuration")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Use ARM NEON intrinsic to implement ixgbe vPMD
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
[style fixes as highlighted by checkpatch.pl]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This patch introduces scalable multi-writer Cuckoo Hash insertion
based on a split Cuckoo Search and Move operation using Intel
TSX. It can do scalable hash insertion with 22 cores with little
performance loss and a negligible TSX abort rate.
* Added an extra rte_hash flag definition to switch default single writer
Cuckoo Hash behavior to multiwriter.
- If HTM is available, it would use hardware feature for concurrency.
- If HTM is not available, it would fall back to spinlock.
* Created a rte_cuckoo_hash_x86.h file to hold all x86-arch related
cuckoo_hash functions. And rte_cuckoo_hash.c uses compile time flag to
select x86 file or other platform-specific implementations. While HTM check
is still done at runtime (same idea with
RTE_HASH_EXTRA_FLAGS_TRANS_MEM_SUPPORT)
* Moved rte_hash private struct definitions to rte_cuckoo_hash.h, to allow
rte_cuckoo_hash_x86.h or future platform dependent functions to include.
* Following new functions are created for consistent names when new platform
TM support are added.
- rte_hash_cuckoo_move_insert_mw_tm: do insertion with bucket movement.
- rte_hash_cuckoo_insert_mw_tm: do insertion without bucket movement.
* One extra multi-writer test case is added.
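A minimal sketch of enabling multi-writer behavior when creating a hash
table; the flag name RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD is assumed here:

	struct rte_hash_parameters params = {
		.name = "mw_hash",
		.entries = 1 << 16,
		.key_len = sizeof(uint32_t),
		.hash_func = rte_jhash,
		.socket_id = rte_socket_id(),
		.extra_flag = RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD,
	};
	struct rte_hash *h = rte_hash_create(&params);
	/* rte_hash_add_key() may now be called concurrently from several lcores */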
Signed-off-by: Wei Shen <wei1.shen@intel.com>
Signed-off-by: Sameh Gobriel <sameh.gobriel@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.
The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the handler that will be used when populating
the mempool.
This patch also adds a set of default ops (function callbacks) based
on rte_ring.
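A small sketch of the sequence described above (name, sizes, elt_size and
the ops name are placeholders):

	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("pool", 8192, elt_size, 256, 0,
				      SOCKET_ID_ANY, 0);
	/* keep the default ring-based handler, or pick another registered one */
	rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
	rte_mempool_populate_default(mp);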
Signed-off-by: David Hunt <david.hunt@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
NSH packet can be recognized by Intel X710/XL710 series.
This patch enables the new packet type.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yulong Pei <yulong.pei@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
- added VXLAN, GENEVE and NVGRE tunnel flow types
- added PORT flow type for accounting physical/virtual
port or channel number in flow creation
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Add a new virtual device named virtio-user, which can be used just like
eth_ring, eth_null, etc. To reuse the code of the original virtio, we make
some adjustments in virtio_ethdev.c, such as removing the static keyword
from eth_virtio_dev_init() so that it can be reused by the virtual device;
and we add some checks to make sure it will not crash.
Configured parameters include:
- queues (optional, 1 by default), number of queue pairs, multi-queue
not supported for now.
- cq (optional, 0 by default), not supported for now.
- mac (optional), random value will be given if not specified.
- queue_size (optional, 256 by default), size of virtqueues.
- path (mandatory), path of vhost user.
When CONFIG_RTE_VIRTIO_USER is enabled (it is enabled by default), the
compiled library can be used in both VM and container environments.
Examples:
path_vhost=<path_to_vhost_user> # use vhost-user as a backend
sudo ./examples/l2fwd/build/l2fwd -c 0x100000 -n 4 \
--socket-mem 0,1024 --no-pci --file-prefix=l2fwd \
--vdev=virtio-user0,mac=00:01:02:03:04:05,path=$path_vhost -- -p 0x1
Known issues:
- Control queue and multi-queue are not supported yet.
- Cannot work with --huge-unlink.
- Cannot work with no-huge.
- Cannot work when there are more than VHOST_MEMORY_MAX_NREGIONS(8)
hugepages.
- Root privilege is a must (mainly because of sorting hugepages according
to physical address).
- Applications should not use file name like HUGEFILE_FMT ("%smap_%d").
- Cannot work with vhost-net backend.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
No other DPDK PMD supports concurrently receiving or sending
packets on the same queue. The upper application should deal with
this, normally through queue and core bindings.
For historical reasons, vhost internally supports concurrent lockless
enqueuing of packets to the same virtio queue through a costly cmpset
operation. This patch removes this internal lockless implementation and
should improve performance a bit.
Luckily, DPDK OVS doesn't rely on this behavior.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Allow reconnecting on failure by default when:
- DPDK app starts first and QEMU (as the server) is not started yet.
Without reconnecting, DPDK app would simply fail on vhost-user
registration.
- QEMU restarts, say due to OS reboot.
Without reconnecting, you can't re-establish the connection without
restarting DPDK app.
This patch makes it work well for both of the above cases. It simply creates
a new thread, and keeps trying to call connect() until it succeeds.
Reconnect can be disabled when the RTE_VHOST_USER_NO_RECONNECT flag
is set.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add a new parameter (flags) to rte_vhost_driver_register(). DPDK
vhost-user acts in client mode when the RTE_VHOST_USER_CLIENT flag
is set. The flags would also allow future extensions without
breaking the API (again).
The rest is straightforward then: allocate a unix socket, and
bind/listen for the server, connect for the client.
This extension is for vhost-user only, therefore we simply quit
and report an error when any flags are given for vhost-cuse.
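A minimal sketch (the socket path is a placeholder):

	/* act as a vhost-user client; optionally OR in RTE_VHOST_USER_NO_RECONNECT */
	rte_vhost_driver_register("/tmp/vhost-user.sock", RTE_VHOST_USER_CLIENT);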
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
With all the previous preparation work, we are just one step away from
the final ABI refactoring. That is, to change the current APIs to make them
stick to vid instead of the old virtio_net dev.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
The new API rte_vhost_avail_entries() is actually a rename of
rte_vring_available_entries(), with the "vring" to "vhost" name
change to keep the consistency of other vhost exported APIs.
This change could let us avoid the dependency of "virtio_net"
struct, to prepare for the ABI refactoring.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
1. add KNI support to the IP Pipeline sample application
2. some bug fixes
3. update the doc
4. add a config file with two KNI interfaces connected using
a Linux kernel bridge
Signed-off-by: WeiJie Zhuang <zhuangwj@gmail.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
add KNI port type to the packet framework
Signed-off-by: WeiJie Zhuang <zhuangwj@gmail.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Underlying libsso_snow3g library now supports bit-level
operations, so PMD has been updated to allow them.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
With the library update, the way to compile the library
has changed, so documentation reflects this change.
Also, the patch to fix the compilation issues present with gcc > 5.0
has been removed, as the issues have been fixed in the library.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso library that the SNOW3G PMD uses has been updated,
so now it is called libsso_snow3g. Also, the path to the library
has been renamed to reflect these changes (now called LIBSSO_SNOW3G_PATH).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Added new SW PMD which makes use of the libsso_kasumi SW library,
which provides wireless algorithms KASUMI F8 and F9
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
- RTE_CRYPTO_SYM_AUTH_KASUMI_F9
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Some crypto PMDs that support symmetric crypto were not marked
as supported in the supported feature flags table.
Fixes: 2373c0661b ("doc: add cryptodevs guide overview")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
The new pdump tool is added for packet capturing on DPDK.
This tool runs as a secondary process by default.
The tool provides command line options like
port, device_id and queue, which the user should pass
to request packet capture on those devices.
The tool creates the rte_ring, mempool and pcap vdev and
calls the enable API of the pdump library with port/device_id,
queue, ring and mempool as arguments to enable packet
capture on specific devices, and gets the packets from the
primary process over the ring. Once the packets are
received, they are sent to the pcap vdev.
The tool can be terminated with ctrl+c (SIGINT), upon which it
calls the disable API of the pdump library to disable packet capture,
dequeues the rest of the packets from the ring and sends them
to the pcap vdev, and then releases all allocated resources.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The librte_pdump library provides a framework for
packet capturing in DPDK. The library provides a set of
APIs to initialize the packet capture framework, to
enable or disable the packet capture, and to uninitialize
it.
The librte_pdump library works on a client/server model.
The server is responsible for enabling or disabling the
packet capture and the clients are responsible
for requesting the enabling or disabling of the packet
capture.
The enable APIs take port, queue, ring and
mempool parameters. Applications should pass this information
to get the packets from the DPDK ports.
For enable requests from applications, the library creates a client
request containing the mempool, ring, port and queue information and
sends the request to the server. After receiving the request, the server
registers Rx and Tx callbacks for all the ports and queues.
After the callback registration, the registered callbacks will get the
Rx and Tx packets. The packets are then copied to new mbufs that
are allocated from the mempool passed by the user. These new mbufs are
then enqueued onto the ring passed by the application. Applications need
to dequeue the mbufs from the rings and direct them to devices like the
pcap vdev for viewing the packets outside of DPDK
using packet capture tools.
For disable requests, the library creates a client request containing
the port and queue information and sends the request to the server.
After receiving the request, the server removes the Rx and Tx callbacks
for all the ports and queues.
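A rough sketch of the flow from an application's perspective; the exact
parameter order of the enable call is an assumption based on the description
above (port/queue/flags/ring/mempool), and error handling is omitted:

	/* primary application: initialize the capture framework
	 * (NULL is assumed here to mean the default server socket path) */
	rte_pdump_init(NULL);

	/* client (e.g. the pdump tool): request capture on port 0, queue 0 */
	rte_pdump_enable(0, 0, RTE_PDUMP_FLAG_RXTX, ring, mp, NULL);

	/* ... capture runs, mbufs are dequeued from 'ring' ... */

	rte_pdump_disable(0, 0, RTE_PDUMP_FLAG_RXTX);
	rte_pdump_uninit();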
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The new fields nb_rx_queues and nb_tx_queues are added to the
rte_eth_dev_info structure.
Changes to API rte_eth_dev_info_get() are done to update these new fields
to the rte_eth_dev_info object.
The release notes are updated with the changes.
The librte_pdump library needs to register Rx and Tx callbacks for all
the nb_rx_queues and nb_tx_queues, when an application wants to capture
packets on all the software-configured Rx and Tx queues of the
device. So far there was no way to get the nb_rx_queues and nb_tx_queues
information from the ethdev library. Hence these changes are introduced.
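For example, a small sketch of reading the new fields:

	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	/* number of Rx/Tx queues configured by the application on this port */
	printf("rxq=%u txq=%u\n", dev_info.nb_rx_queues, dev_info.nb_tx_queues);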
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Adds and documents new callbacks that allow transitions to core
states other than dead to be reported to applications.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the test-pmd
and proc_info applications to use the new xstats API, and removes
deprecated code associated with the old API.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the xstats
functions to instead use a numeric identifier rather than a string, and
adds the ability to retrieve identifier-to-string mappings.
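A sketch of id-based retrieval as described above; the struct and function
names follow the new ethdev xstats API and should be treated as assumptions
here (error handling omitted):

	int i, n = rte_eth_xstats_get_names(port_id, NULL, 0);
	struct rte_eth_xstat_name *names = malloc(n * sizeof(*names));
	struct rte_eth_xstat *values = malloc(n * sizeof(*values));

	rte_eth_xstats_get_names(port_id, names, n);
	rte_eth_xstats_get(port_id, values, n);
	for (i = 0; i < n; i++)
		/* values[i].id is simply the array index after this change */
		printf("%s: %" PRIu64 "\n", names[i].name, values[i].value);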
Signed-off-by: Remy Horton <remy.horton@intel.com>
This patch enables a configurable tx_first burst number.
Use "start tx_first (burst_num)" to specify how many bursts of packets should
be sent before forwarding starts, or "start tx_first" as before for the
default single burst.
Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch adds retry option in testpmd to prevent most packet losses.
It can be enabled by "set fwd <mode> retry". All modes except rxonly
support this option.
Adding retry mechanism expands test case coverage to support scenarios
where packet loss affects test results.
Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The behavior of PKT_RX_VLAN_PKT was not very well defined, resulting in
PMDs not advertising the same flags in similar conditions.
Following discussion in [1], introduce 2 new flags PKT_RX_VLAN_STRIPPED
and PKT_RX_QINQ_STRIPPED that are better defined:
PKT_RX_VLAN_STRIPPED: a vlan has been stripped by the hardware and its
tci is saved in mbuf->vlan_tci. This can only happen if vlan stripping
is enabled in the RX configuration of the PMD.
For now, the old flag PKT_RX_VLAN_PKT is kept but marked as deprecated.
It should be removed from applications and PMDs in a future revision.
This patch also updates the drivers. For PKT_RX_VLAN_PKT:
- e1000, enic, i40e, mlx5, nfp, vmxnet3: done, PKT_RX_VLAN_PKT already
had the same meaning as PKT_RX_VLAN_STRIPPED, only a minor update is
required.
- fm10k: done, PKT_RX_VLAN_PKT already had the same meaning as
PKT_RX_VLAN_STRIPPED, and vlan stripping is always enabled on fm10k.
- ixgbe: modification done (vector and normal), the old flag was set
when a vlan was recognized, even if vlan stripping was disabled.
- the other drivers do not support vlan stripping.
For PKT_RX_QINQ_PKT, it was only supported on i40e, and the behavior was
already correct, so we can reuse the same bit value for
PKT_RX_QINQ_STRIPPED.
[1] http://dpdk.org/ml/archives/dev/2016-April/037837.html
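For example, an application can now check the stripped VLAN consistently
across PMDs (m is an rte_mbuf pointer):

	if (m->ol_flags & PKT_RX_VLAN_STRIPPED)
		/* the TCI of the stripped VLAN is saved in m->vlan_tci */
		printf("stripped vlan tci=%u\n", m->vlan_tci);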
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch documents the issue that the last EAL argument in argv[] is
replaced by the program name.
Reported-by: Ziye Yang <ziye.yang@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch adds the class_id (class_code,
subclass_code, programming_interface) support for
PCI device probing. With this patch, it becomes
possible for users to probe a class of devices
by class_id.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
The commit 66819e6 has introduced a dependency on libarchive to be able
to use some tar resources in the unit tests.
It is now an optional dependency because some systems do not have it
installed.
If CONFIG_RTE_APP_TEST_RESOURCE_TAR is disabled, the PCI test will not
be run. When a "configure" script will be integrated, the libarchive
availability could be checked to automatically enable the option.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Reviewed-by: Jan Viktorin <viktorin@rehivetech.com>
The log history uses rte_mempool. In order to remove the mempool
dependency in EAL (and improve the build), this feature is deprecated.
The ABI is kept but the behaviour is now voided because it seems this
function was not used. The history can be read from syslog.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Fix broken console directive in the ABI validator section of the
ABI versioning docs.
Fixes: f1ef9794f9 ("doc: add ABI guidelines")
Signed-off-by: John McNamara <john.mcnamara@intel.com>
A previous patch modified the CLIs without updating the examples.
Fixes: 53b2bb9b7e ("app/testpmd: new flow director commands")
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
A port needs to be stopped and then closed before it can be detached,
but the documentation only said to close the port.
Also, both sections for port detaching and attaching have been reformatted
slightly, to show clearly how to use the commands.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
"port config all txqflags <value>" allows for
specifying txq_flags value in command line.
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
"port config all scatter on|off" allows for
controlling rxmode.enable_scatter in command line.
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This parameter allows for controlling rxmode.enable_scatter
which in turn allow for multi-segment packet receive tests.
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The routing pipeline registers a callback function with the NIC ports, and
this function is invoked to update the routing table entries (corresponding
to the local host and directly attached networks) whenever the NIC ports
change their state (up/down).
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
As a result of tracking, output ports of routing pipelines are linked with
physical NIC ports (potentially through other pipeline instances).
Thus, the MAC addresses of the NIC ports are assigned to the routing pipeline
output ports which are connected to them, and are further used in routing
table entries instead of hardcoded default values.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
This patch enables RSS (receive side scaling) per network interface
through the configuration file. The user can specify the following
parameters in the LINK section to enable the RSS feature: rss_qs,
rss_proto_ipv4, rss_proto_ipv6 and ip_proto_l2.
The "rss_qs" parameter is mandatory and indicates the queues to be
used for RSS, while the rest of the parameters are optional. When the
optional parameters are not provided in the configuration file, the default
setting (ETH_RSS_IPV4 | ETH_RSS_IPV6) is assumed for the "rss_hf" field of
the rss_conf structure.
For example, the following configuration can be applied for using RSS
on port 0 of the network interface:
[PIPELINE0]
type = MASTER
core = 0
[LINK0]
rss_qs = 0 1
[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0 RXQ0.1 RXQ1.0
pktq_out = TXQ0.0 TXQ1.0 TXQ0.1
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Corrected a typo in application name.
Corrected authentication algorithm to fit the sample 16-byte
authentication key.
Fixes: ba7b86b1 ("doc: add l2fwd-crypto sample app guide")
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
This patch provides counter mode support to AES-NI multi-buffer library.
The following cipher algorithm is enabled:
- RTE_CRYPTO_CIPHER_AES_CTR
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added possibility for AES to work in counter mode
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
AES-NI MB PMD supports 128, 192 and 256-bit keys,
not 128, 256 and 512-bit keys.
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Commit 4768c475 added a pointer to the memzone in rte_ring. However,
all memzones reside in the local mem_config, therefore accessing
the memzone pointer inside the guest in an IVSHMEM-shared rte_ring
will cause a segmentation fault. This issue is unlikely to ever get
fixed, as this would require lots of changes for very little benefit,
therefore we're documenting this limitation instead.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Added PEP8 to the DPDK Coding Style guidelines to cover Python
contributions to DPDK.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Remove the deprecation notice and add an entry in the release note
for the changes in mempool allocation.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The rte_pktmbuf_detach() function should decrease refcnt on a direct
buffer as stated in doc/guides/prog_guide/mbuf_lib.rst:
"whenever the indirect buffer is detached, the reference counter on the
direct buffer is decremented."
Signed-off-by: Hiroyuki Mikita <h.mikita89@gmail.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The rte_mempool structure is changed, which will cause an ABI change
for this structure. Providing backward compat is not reasonable
here as this structure is used in multiple defines/inlines.
Allow mempool cache support to be dynamic depending on if the
mempool being created needs cache support. Saves about 1.5M of
memory used by the rte_mempool structure.
Allocating small mempools which do not require a cache can consume
large amounts of memory if you have a number of these mempools.
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
We don't want to have these instructions in the generated docs, so use
comments. It's also less confusing for people adding entries to the
documentation.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
The rte_eth_dev_count() function will never return a value greater
than RTE_MAX_ETHPORTS, so that checking is useless.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
It's reported that it has not been working for a long while, and due
to its complexity, it's better to redesign it than to fix it to make it
work again.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Previously, for tunnel packets, such as VXLAN/NVGRE, the VLAN
tags of the inner header were stripped without putting the VLAN
info into the descriptor, which is not the expected behaviour.
This patch fixes it by changing the hardware configuration to leave
the inner packet alone.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
The QLogic Everest Driver for Ethernet (QEDE) Poll Mode Driver (PMD) is
the DPDK specific module for QLogic FastLinQ QL4xxxx 25G/40G CNA family
of adapters as well as their virtual functions (VF) in SR-IOV context.
This patch adds QEDE PMD, which interacts with base driver and
initialises the HW.
This patch content also includes:
- eth_dev_ops callbacks
- Rx/Tx support for the driver
- link default configuration
- change link property
- link up/down/update notifications
- vlan offload and filtering capability
- device/function/port statistics
- qede nic guide and updated overview.rst
Note that the follow on commits contain the code for the features mentioned
in documents but not implemented in this patch.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The macro RTE_VERIFY always checks a condition.
It is optimized with "unlikely" hint.
While this macro is well suited for test applications, it is preferred
in libraries and examples to enable such check in debug mode.
That's why the macro RTE_ASSERT is introduced to call RTE_VERIFY only
if built with debug logs enabled.
A lot of assert macros were duplicated and enabled with a specific flag.
Removing these #ifdefs allows testing these code branches more easily
and avoids dead code pitfalls.
The ENA_ASSERT is kept (in debug mode only) because it has more
parameters to log.
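A short illustration of the intended difference in use (buf, len and
buf_size are placeholders):

	/* always checked, in all build types */
	RTE_VERIFY(buf != NULL);

	/* checked only when built with debug logs enabled,
	 * compiled out otherwise */
	RTE_ASSERT(len <= buf_size);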
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The default was to compile all logs (including debug) and set
the default level to debug.
As some debug logs may hurt performance, a notice is added and the
default level is now info.
In order to enable debug logs, they must be compiled with
RTE_LOG_LEVEL=RTE_LOG_DEBUG and enabled at runtime with --log-level=8.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Changed symbol on NIC overview table from X to Y to help
clarify the indicated features are supported. The X caused
confusion for some readers.
Also, added * character to indicate partially supported
features. This can be used in the future to direct the reader
to more specific details in the individual NIC guides.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
The function rte_hash_lookup_multi() was renamed rte_hash_lookup_bulk()
in DPDK 1.4 and was kept as an undocumented alias.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some statistics were deprecated since release 2.1 (49f386542a).
The last deprecated counter to be used was imcasts.
The VF loopback statistics are also removed as they are used only
in igb and duplicated in extended statistics.
The new counters should be added to extended statistics.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Remy Horton <remy.horton@intel.com>
The driver i40e was using a specific PCI config before the release 16.04.
Since 16.04, it is always enabled in i40e (commit 56465cfaf).
The API has been deprecated in the commit 68f7759382.
The igb_uio implementation has been deprecated in commit b7cf8e155.
The config helper - through igb_uio sysfs entries - is now removed.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Support of PCAP file has been added to rte_port in release 16.04
as NEXT_ABI. It is in the standard ABI of the release 16.07.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
The git messages have three parts:
1/ the headline
2/ the explanations
3/ the footer tags
The headline helps to quickly browse a history or to instantly catch the
purpose of a commit. Keeping it short, with some consistent wording,
makes it easy to parse or to match against some patterns.
The explanations must give some key information, such as the reason for
the change. Nothing can be automatically checked for this part, except
line length.
The footer contains some tags to find the origin of a bug or who
was working on it.
This script does some basic checks, mostly on parts 1 and 3.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add a new section on tested platforms and nics to the release notes.
Signed-off-by: Qian Xu <qian.q.xu@intel.com>
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Following discussions with Jan, here is a deprecation notice to prepare for
hotplug and rte_device changes to come in 16.07.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
We currently expose far more fields (or even structures) than
necessary. For example, the vhost_virtqueue struct should NOT be exposed
to the user at all: the application just needs to give the right queue id to
locate a specific queue, and that's all. Instead, the structure should
be defined in an internal header file. With that, we could make any changes
to it we want, without worrying that we may violate the painful
ABI rules.
Similar changes could be done to the virtio_net struct as well, just exposing
the very few fields that are necessary and moving all others to an internal
structure.
Huawei then suggested a more radical yet much cleaner approach: just expose
a virtio_net handle to the application, just like the way the kernel exposes an
fd to the user for locating a specific file, and expose some new functions
to access those old fields, such as flags and virt_qp_nb.
With this change, we are likely to be free from ABI violations forever
(well, except when we have to extend the virtio_net_device_ops struct).
For example, the following nice cleanup would then not be blocked:
http://dpdk.org/ml/archives/dev/2016-February/033528.html
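As a purely hypothetical sketch of what such a handle-based interface could
look like (the accessor name below is illustrative only, not the proposed
API):
uint32_t
app_get_queue_pairs(int vid)
{
	/* the application holds only an integer handle, like a file
	 * descriptor, and never dereferences struct virtio_net directly */
	return vhost_get_queue_pair_num(vid); /* hypothetical accessor */
}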
Suggested-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Add a programmer's guide section for cryptodev library.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Details supported device features and algorithms for each crypto PMD.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch adds a notice that the API for the xstats
functionality will be modified in the 16.07 release, with
no backwards compatibility planned as it would require
code duplication in each PMD that supports xstats.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Acked-by: David Harton <dharton@cisco.com>
Acked-by: Maryam Tahhan <maryam.tahhan@intel.com>
Several new fields will be added to structure rte_port_source_params for
source port enhancement with pcap file reading support.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Enable "other kdrv" because both NICs require kernel support to work.
Unicast and multicast MAC filters are also enabled as both address types can
be filtered on through the MAC add/remove/set callbacks.
Fixes: e86b85ca75 ("doc: fill nics features matrix for mlx")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Fixes: 83a4a15404 ("doc: fill nics features matrix for e1000/igb and ixgbe")
Reported-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch updates the titles in the multiprocess memory image
to read "Primary Process" and "Secondary Process" instead of
"DPDK Server Process" and "Customer Client Process".
The rest of the image has been converted from PNG to SVG.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Explain how to create/initialize virtual crypto PMDs,
through command line and within an application.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
A new process to request the libsso library required by the SNOW3G PMD
has been put in place, through a website, replacing the previous email method.
This commit updates the SNOW3G documentation, to reflect this change.
Since the library does not support newer gcc versions, the documentation
also contains a patch to make the library work with gcc > 5.0.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Change rte_hash*_create() functions to return NULL and set rte_errno to
EEXIST when the object name already exists. This is the behavior
described in the API documentation in the header file.
These functions were returning a pointer to the existing object in that
case, but it is a problem as the caller did not know if the object had
to be freed or not.
Doing this change also makes the hash API more consistent with the other
APIs (mempool, rings, ...).
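For example, a caller can now distinguish the duplicate-name case (a minimal
sketch; the parameter values are illustrative):
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <rte_errno.h>
#include <rte_hash.h>

static struct rte_hash *
create_example_hash(void)
{
	struct rte_hash_parameters params = {
		.name = "example_hash",
		.entries = 1024,
		.key_len = sizeof(uint32_t),
		.socket_id = 0,
	};
	struct rte_hash *h = rte_hash_create(&params);

	if (h == NULL && rte_errno == EEXIST)
		printf("hash '%s' already exists; do not free it here\n",
			params.name);
	return h;
}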
Fixes: 916e4f4f4e ("memory: fix for multi process support")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Change rte_lpm*_create() functions to return NULL and set rte_errno to
EEXIST when the object name already exists. This is the behavior
described in the API documentation in the header file.
These functions were returning a pointer to the existing object in that
case, but it is a problem as the caller did not know if the object had
to be freed or not.
Doing this change also makes the lpm API more consistent with the other
APIs (mempool, rings, ...).
Fixes: 916e4f4f4e ("memory: fix for multi process support")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The issue is that the VF's link speed was kept at 10G and its status was
always up. It did not change even when the physical link's status changed.
This patch fixes this issue to make VF's link info consistent with
physical link.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Note: virtio is a para-virtualization device, which means that its
features depend not only on the front end but also on the back end. Here,
by the mark, we just mean the feature is supported in the front end.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
The purpose of this patch is to add a new field
"class" in the rte_pci_id structure. The new class field includes the
class_id, subclass_id and programming interface of a PCI device.
With this field, we can identify a PCI device by its class info,
which is more flexible than probing the device by
vendor_id OR device_id OR subvendor_id OR subdevice_id.
For example, we can probe all NVMe devices by the class field, which
can be quite convenient.
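A sketch of such a table, keyed on the NVMe class code 0x010802 (mass
storage / NVM / NVMe programming interface); the exact field name
(class vs class_id) is an assumption here:
static const struct rte_pci_id pci_id_nvme_map[] = {
	{
		.class_id = 0x010802,	/* base class, subclass, prog-if */
		.vendor_id = PCI_ANY_ID,
		.device_id = PCI_ANY_ID,
		.subsystem_vendor_id = PCI_ANY_ID,
		.subsystem_device_id = PCI_ANY_ID,
	},
	{ .vendor_id = 0, /* sentinel */ },
};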
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
Add a deprecation notice for coming changes in mempool for 16.07.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: David Hunt <david.hunt@intel.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
Announce the ABI breakage due to addition of external mempool
manager functionality which requires changes to rte_mempool
structure.
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Deprecation notice for 16.04 for changes to occur in
release 16.07 for rte_mempool memory reduction.
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: David Hunt <david.hunt@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
The link speed configuration is now done with bitmaps so 100G speed
requires only a new bit flag.
The actual link speed is a number so its size must be increased from
16-bit to 32-bit.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Tested-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Tested-by: Matej Vido <vido@cesnet.cz>
This patch redesigns the API to set the link speed(s) configuration
of an ethernet port. Specifically:
- it allows defining a set of advertised speeds for
auto-negotiation.
- it allows disabling link auto-negotiation (single fixed speed).
- default: auto-negotiate all supported speeds.
A flag autoneg in struct rte_eth_link indicates whether the link speed was a
result of auto-negotiation or was fixed by configuration.
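A minimal configuration sketch, assuming the ETH_LINK_SPEED_* bitmap flags
and the link_speeds field introduced by this rework:
static int
configure_link_speeds(uint8_t port_id)
{
	struct rte_eth_conf conf = {
		/* advertise only 1G and 10G for auto-negotiation */
		.link_speeds = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G,
	};

	/* alternatively, force one fixed speed (auto-negotiation off):
	 * conf.link_speeds = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_FIXED; */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}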
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Tested-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Tested-by: Beilei Xing <beilei.xing@intel.com>
Tested-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The speed capabilities of a device can be retrieved with
rte_eth_dev_info_get().
The new field speed_capa is initialized in the drivers without
taking care of device characteristics in this patch.
When the capabilities of a driver are accurate, the table in
overview.rst must be filled.
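A minimal query sketch, assuming speed_capa carries the same
ETH_LINK_SPEED_* flags:
static void
print_port_speed_capa(uint8_t port_id)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	if (dev_info.speed_capa & ETH_LINK_SPEED_10G)
		printf("port %u can do 10G\n", port_id);
}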
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Hash library used a function pointer to choose a different
key compare function, depending on the key size.
As a result, multiple processes could not use the same hash table,
as the function addresses vary from one process to another.
Instead, a jump table is used, so each process has its own
function addresses, accessing this table with an index stored
in the hash table (note that using a custom key compare function
is not supported in multi-process mode).
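A minimal sketch of the pattern (not the library's actual code): the shared
structure stores an index, and each process resolves it through its own
local jump table, so no function address crosses process boundaries.
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef int (*key_cmp_t)(const void *a, const void *b, size_t len);

static int
cmp_generic(const void *a, const void *b, size_t len)
{
	return memcmp(a, b, len);
}

/* one table per process; indexes are stable, addresses are not shared */
static const key_cmp_t cmp_jump_table[] = { cmp_generic /*, cmp_k16, ... */ };

struct shared_hash {
	uint32_t cmp_jump_table_idx;	/* stored in the shared hash table */
};

static inline int
key_compare(const struct shared_hash *h, const void *a, const void *b,
	    size_t len)
{
	return cmp_jump_table[h->cmp_jump_table_idx](a, b, len);
}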
Fixes: 48a3991196 ("hash: replace with cuckoo hash implementation")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch adds out-of-place operations to qat symmetric crypto PMD,
i.e. the result of the operation can be written to the destination buffer
instead of overwriting the source buffer as done in "in-place" operation.
Both buffers can be of different sizes.
Previously the qat PMD assumed that m_src and m_dst in rte_crypto_sym_op
were identical.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
Previously, the vector driver was not the first (default) choice for i40e,
as it cannot fill packet type info for l3fwd to work well. Now there
is an option for l3fwd to analyze the packet type in software, so the
vector driver is enabled by default.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
As an example of using ptype info, l3fwd first needs to use the
rte_eth_dev_get_supported_ptypes() API to check whether the device and/or
its PMD driver will parse and fill the needed packet type; if not,
it uses the newly added option, --parse-ptype, to analyze it in software
in the callback.
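A minimal sketch of that check, assuming the
rte_eth_dev_get_supported_ptypes() signature added in this release:
static int
hw_fills_l3_ptype(uint8_t portid)
{
	uint32_t ptypes[16];
	int i, n;

	n = rte_eth_dev_get_supported_ptypes(portid, RTE_PTYPE_L3_MASK,
					     ptypes, RTE_DIM(ptypes));
	if (n > (int)RTE_DIM(ptypes))
		n = RTE_DIM(ptypes);
	for (i = 0; i < n; i++)
		if (ptypes[i] & RTE_PTYPE_L3_MASK)
			return 1;
	/* not provided by HW/PMD: fall back to the --parse-ptype callback */
	return 0;
}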
As the EXACT_MATCH mode uses the 5-tuple to calculate the hash, we
narrow down its scope to:
a. IP packets with no extensions, and
b. the L4 payload should be either TCP or UDP.
Note: this patch does not completely solve the issue, "cannot run
l3fwd on virtio or other devices", because hw_ip_checksum may not be
supported by the devices. Currently we can:
a. remove this requirement, or
b. wait for the virtio front end (PMD) to support it.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Ixgbe HW supports 128 TX queues. However, the full 128 queues are only
available in VT and DCB mode. In normal default "none" mode (VT/DCB off)
the maximum number of available queues is only 64.
The driver doesn't check the mode when reporting the available
number of queues, allowing more than 64 queues to be used in all cases.
If a queue number >= 64 is used in default mode, the TX packets will be
dropped silently.
This change adds a check to forbid using a queue number larger than 64
during device configuration (in default mode), so that the problem is
reported as early as possible.
Fixes: 27b609cbd1 ("ethdev: move the multi-queue mode check to specific drivers")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch extends the commands for changing flow director filter's input
set. It adds vlan as a possible filter input field.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch extends flow director to select vlan id as part of
filter's input set and program the filter rule with vlan id.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch extends commands for changing a flow director filter's input
set. It adds tos, protocol and ttl as filter's input fields, and removes
the words selection from flex payloads.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds RTE_ETH_INPUT_SET_L3_IP4_TTL,
RTE_ETH_INPUT_SET_L3_IP6_HOP_LIMITS input field types and extends
struct rte_eth_ipv4_flow and rte_eth_ipv6_flow to support filtering
by tos, protocol and ttl.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
VLAN insertion can be done in hardware when supported in Verbs. A software
fallback is provided otherwise. The software implementation is also used
when multi-packet send is enabled on a queue, as both features are mutually
exclusive.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Environment variable MLX5_PMD_ENABLE_PADDING enables HW packet padding
in PCI bus transactions.
When packet size is cache aligned and CRC stripping is enabled, 4 fewer
bytes are written to the PCI bus. Enabling padding makes such packets
aligned again.
In cases where PCI bandwidth is the bottleneck, padding can improve
performance by 10%.
This is disabled by default since this can also decrease performance for
unaligned packet sizes.
Signed-off-by: Olga Shern <olgas@mellanox.com>
fix packet padding macro check
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Secondary processes are expected to use queues and other resources
allocated by the primary, however Verbs resources can only be shared
between processes when inherited through fork().
This limitation can be worked around for TX by configuring separate queues
from secondary processes.
Signed-off-by: Or Ami <ora@mellanox.com>
Add driver functions to set link state up or down.
Burst functions are updated to make sure applications cannot attempt to
send/receive after link is brought down.
Signed-off-by: Or Ami <ora@mellanox.com>
When a Linux PF and a DPDK VF are used with the i40e PMD and a PF reset
occurs, an interrupt will go via an adminq event to inform the VF of the reset.
A callback mechanism is introduced to allow the VF to invoke a
registered callback when the PF reset happens.
Users can register a callback for this interrupt event using:
rte_eth_dev_callback_register(portid,
                              RTE_ETH_EVENT_INTR_RESET,
                              reset_event_callback,
                              arg);
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The patch introduces a new PMD. This PMD is implemented as a thin wrapper
around librte_vhost, which means librte_vhost is also needed to compile
the PMD.
The vhost messages will be handled only when a port is started, so start
a port first, then invoke QEMU.
The PMD has 2 parameters.
- iface: The parameter is used to specify a path to connect to a
virtio-net device.
- queues: The parameter is used to specify the number of the queues
virtio-net device has.
(Default: 1)
Here is an example.
$ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i
To connect to the above testpmd, here is a QEMU command example.
$ qemu-system-x86_64 \
<snip>
-chardev socket,id=chr0,path=/tmp/sock0 \
-netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=1 \
-device virtio-net-pci,netdev=net0,mq=on
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Update for queue state event name:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This is a PMD for the Amazon ethernet ENA (Elastic Network Adapters)
family.
The driver operates a variety of ENA adapters through feature negotiation
with the adapter and an upgradable command set.
The ENA driver handles PCI physical and virtual ENA functions.
Signed-off-by: Evgeny Schemeilin <evgenys@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Release Note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This patch implements PQoS as a sample application.
PQoS allows management of the CPU's last level cache,
which can be useful for DPDK to ensure quality of service.
The sample app links against the existing 01.org PQoS library
(https://github.com/01org/intel-cmt-cat).
White paper demonstrating example use case "Increasing Platform Determinism
with Platform Quality of Service for the Data Plane Development Kit"
(http://www.intel.com/content/www/us/en/communications/increasing-platform-determinism-pqos-dpdk-white-paper.html)
Signed-off-by: Wojciech Andralojc <wojciechx.andralojc@intel.com>
Signed-off-by: Tomasz Kantecki <tomasz.kantecki@intel.com>
Signed-off-by: Marcel D Cornu <marcel.d.cornu@intel.com>
This patch updates the release notes with the features that
have been added to ip_pipeline application.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Allow dynamic deallocation of af_packet device through proper
API functions. To achieve this:
* set device flag to RTE_ETH_DEV_DETACHABLE
* implement rte_pmd_af_packet_devuninit() and expose it
through rte_driver.uninit()
* copy device name to ethdev->data to make discoverable with
rte_eth_dev_allocated()
Moreover, make af_packet init function static, as there is no
reason to keep it public.
Signed-off-by: Wojciech Zmuda <woz@semihalf.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Add support for linking multi-segment buffers together to
handle Jumbo packets. The vmxnet3 API supports having header
and body buffer types. What this patch does is fill the primary
ring completely with header buffers and the secondary ring
with body buffers. This allows non-jumbo frames to use only
one mbuf (from the primary ring), while jumbo frames will have the
first mbuf from the primary ring and the following mbufs from the other
ring.
This could be optimized in the future if DPDK had an API
to supply different sized mbufs (two pools) to the driver.
Signed-off-by: Stephen Hemminger <shemming@brocade.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Release note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This commit adds vmxnet3 TSO support.
Verified with testpmd (set fwd csum) that both TSO and
non-TSO pkts can be successfully transmitted and that all
segments for a TSO pkt are correct on the receiver side.
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Tx data ring support was removed in a previous change that
added multi-seg transmit. This change adds it back.
According to the original commit (2e849373), 64B pkt
rate with l2fwd improved by ~20% on an Ivy Bridge
server at which point we start to hit some bottleneck
on the rx side.
I also re-did the same test on a different setup (Haswell
processor, ~2.3GHz clock rate) on top of the master
and still observed ~17% performance gains.
Fixes: 7ba5de417e ("vmxnet3: support multi-segment transmit")
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Mmap PCI resource file and add inline functions for reading from and
writing to PCI resource address space.
Add description of IBUF and OBUF address space.
Add configuration option for setting which firmware type will be used.
The right address space values for IBUF and OBUF offsets are used
according to the configuration option CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS.
Setting link up/down and getting info about link status is done through
mmapped PCI resource address space.
Signed-off-by: Matej Vido <vido@cesnet.cz>
The PMD was of type PMD_VDEV, which means that the PCI device is not
recognised automatically during EAL initialization but has to be created
with the EAL option --vdev.
Now, the PMD is of type PMD_PDEV, which means that the PCI device is probed
and recognised automatically during EAL initialization.
The path to the szedata2 device file is matched with the device, and the
count of available RX and TX DMA channels is discovered during device
initialization.
Initialization, starting and stopping of queues is changed to better
correspond with Ethernet device API model. Function callbacks
(rx|tx)_queue_(start|stop) are added. Unnecessary items are removed
from ethernet device private data structure.
Signed-off-by: Matej Vido <vido@cesnet.cz>
Change rxq_cq_to_ol_flags() to set checksum flags according to packet type,
so for non L3/L4 packets the mbuf chksum_bad flags will not be set.
Fixes: 67fa62bc67 ("mlx5: support checksum offload")
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
RSS configuration should not be freed when priv is NULL.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Or Ami <ora@mellanox.com>
Allows HW to strip the 802.1Q header from incoming frames and report it
through the mbuf structure.
This feature requires MLNX_OFED >= 3.2.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch enables reading sglort (global resource tag) info into the
mbuf for RX and inserting an FTAG (Fabric Tag) at the beginning of the
packet for TX. The vlan_tci_outer field of the rte_mbuf structure was
selected to hold the sglort, as it is not otherwise used by fm10k now.
In FTAG based forwarding mode, the switch will forward packets according
to glort info in FTAG rather than mac and vlan table.
To activate this feature, user needs to pass a devargs parameter to eal
for fm10k device like "-w 0000:84:00.0,enable_ftag=1". Currently this
feature is supported only on PF, because FM10K_PFVTCTL register is
read-only for VF.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Using SSE instructions to parse error flags in HW Rx descriptor,
then set corresponding bits of mbuf.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
When the TX function tries to free a bunch of mbufs, it frees
them one by one. This change scans the free list and merges the
requests in case they belong to the same pool, then frees them in one
call, which reduces the cycles spent on freeing mbufs.
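A simplified sketch of the idea (not the driver's exact code; it assumes
single-segment mbufs whose reference count is already 1, so they can be
returned straight to their mempool):
static void
tx_free_bulk(struct rte_mbuf **txep, unsigned int n)
{
	void *stash[64];
	struct rte_mempool *mp = NULL;
	unsigned int i, cnt = 0;

	for (i = 0; i < n; i++) {
		struct rte_mbuf *m = txep[i];

		/* flush the stash when the pool changes or the stash is full */
		if (cnt != 0 && (mp != m->pool || cnt == 64)) {
			rte_mempool_put_bulk(mp, stash, cnt);
			cnt = 0;
		}
		mp = m->pool;
		stash[cnt++] = m;
	}
	if (cnt != 0)
		rte_mempool_put_bulk(mp, stash, cnt);
}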
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Previously, l3fwd-power only processed IP and IPv6 packets; other
packets' mbufs were not freed, and this caused a memory leak.
This patch fixes this issue.
Fixes: 3c0184cc0c ("examples: replace some offload flags with packet type")
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
In interrupt mode, each rx queue can have one interrupt to notify the
application when packets are available in that queue. Some queues
can also share one interrupt.
Currently, fm10k needs one separate interrupt for the mailbox. So, only
those drivers which support multiple interrupt vectors, e.g. vfio-pci,
can work in fm10k interrupt mode.
This patch uses the RXINT/INT_MAP registers to map interrupt causes
(rx queue and other events) to vectors, and enable these interrupts
through kernel drivers like vfio-pci.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
This patch implements the ops for adding and removing a MAC
address in the i40evf driver. The functions are assigned as follows:
.mac_addr_add = i40evf_add_mac_addr,
.mac_addr_remove = i40evf_del_mac_addr,
To support setting multiple MAC addresses, this patch also
extends MAC address adding and deletion at device
start and stop. Each VF can have a maximum of 64 MAC
addresses.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
The VEB switching feature for i40e is used to enable switching between the
VSIs connected to the virtual bridge. The old implementation sets the
virtual bridge mode to VEPA, which is port aggregation. Enable the switching
ability by setting the loopback mode for the specific VSIs which connect
to the PF or VFs.
VEB/VSI/VEPA are concepts not specific to the i40e HW; the concepts are
from the 802.1Qbg spec.
IEEE EVB tutorial:
http://www.ieee802.org/802_tutorials/2009-11/evb-tutorial-draft-20091116_v09.pdf
VEB: a virtual switch that can forward packets based on specific match
fields.
VSI: a virtual interface connecting the VEB/VEPA and a virtual machine.
VEPA: a virtual Ethernet port aggregator that sends the packets from the
VSI upstream to the LAN port.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Currently, the example vmdq_dcb only works on Intel(R) 82599 NICs.
This patch extends this sample to make it work both on Intel(R) 82599
and X710/XL710 NICs by making the following changes:
1. add VMDQ base queue checking to avoid forwarding on PF queues.
2. assign each VMDQ pool to a MAC address.
3. add more arguments (nb-tcs, enable-rss) to change the default
setting
4. extend the max number of queues from 128 to 1024.
This patch also reworks the user guide for the vmdq_dcb sample.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Previously, DCB (Data Center Bridging) was only enabled on the PF;
queue mapping and BW configuration were only done on the PF.
This patch enables DCB for VMDQ VSIs(Virtual Station Interfaces)
by following steps:
1. Take BW and ETS(Enhanced Transmission Selection)
configuration on VEB(Virtual Ethernet Bridge).
2. Take BW and ETS configuration on VMDQ VSIs.
3. Update TC(Traffic Class) and queues mapping on VMDQ VSIs.
To enable DCB on VMDQ, the number of TCs should not be larger than
the number of queues in VMDQ pools, and the number of queues per
VMDQ pool is specified by CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
in config/common_* file.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Several structures and macros are added or updated, such
as 'struct i40e_aqc_get_link_status',
'struct i40e_aqc_run_phy_activity' and
'struct i40e_aqc_lldp_set_local_mib_resp'.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
RX control register read/write functions are added, as direct reads/writes
may fail under the stress of small-packet traffic. After the
adminq is ready, all RX control registers should be read/written
by the dedicated functions.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Generate a MAC address for each VF during PF host
initialization.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Normally auto-negotiation is supported by the FW; SW need not care about
it. But on x550em_x, the FW doesn't support auto-neg. As the x550em_x ports
are 10G, if we connect the port to a peer which is 1G, the link will
always be down.
We need to support auto-neg in SW to avoid this link down issue. As we
already have the code to handle the link speed setting, what we need is a
trigger. When the advertised link speed changes, a PHY interrupt will be
triggered. So, we should handle this interrupt and call ixgbe_handle_lasi
to set the link speed correctly.
Please be aware this works when auto-neg is on. If auto-neg on the
peer port is turned off and its speed is set manually, we should also
set the speed of our own port manually.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Add multicast promiscuous mode support in the ixgbe VF driver.
Please note that if we want to use this promiscuous mode, both the PF
and VF drivers need to support it, because this VF feature is
configured on the PF.
If using a kernel PF driver + DPDK VF driver, make sure the kernel PF
driver supports VF multicast promiscuous mode. If using DPDK PF + DPDK VF,
it is better to make sure the PF driver is the same version as the VF.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
The MDIO clock speed must be reconfigured after the MAC reset, because
it becomes invalid and the driver then reads
invalid PHY register values. The driver now sets the MDIO clock
speed prior to initializing PHY ops and again after the MAC reset.
As now the MDIO speed gets set in more than one place, make a
function for it so it will always be done correctly.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Do not set FDIRCTRL.DROP_NO_MATCH in ixgbe_init_fdir_perfect_82599(),
this bit is already set in ixgbe_set_fdir_drop_queue_82599() which
makes more sense for drivers that call that function.
This resolves an issue where packets were being dropped when switching
to perfect filters mode.
Setting this bit makes no sense in perfect filters mode for the
driver as we do not want to route all packets that don't match an FDIR
rule to a single queue and instead fall back to RSS.
Drivers that need this bit set can call ixgbe_set_fdir_drop_queue_82599()
and the ones that don't, can preserve the old behavior.
Fixes: 2241ce2816 ("ixgbe/base: add flow director drop queue")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch resolves an issue where VF mac address is zeroed out
in cases where the VF driver is loaded while the PF interface
is down.
The solution is to only set it when we get an ACK from the PF.
Fixes: 6202266e56 ("ixgbe/base: vf changes")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Only x550em_x V1 was supported before. Now V2 is supported.
A mask for V1 and V2 is defined and used to support both.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Add new X550EM_a devices and their mac types, X550EM_a
and X550EM_a_vf.
Update the code to use the new devices and mac types.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
max_rx_pkt_len already includes ETHER_HDR_LEN and ETHER_CRC_LEN for the
mtu. But, the firmware also adds ETHER_HDR_LEN and ETHER_CRC_LEN to the
mtu specified. Fix by subtracting these values from the mtu before
passing it to firmware.
Fixes: 4b2eff452d ("cxgbe: enable jumbo frames")
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
The size of each entry in the port's rss table is actually 2 bytes
and not 1 byte. A segfault occurs when accessing part of port 0's rss
table because it gets overwritten by subsequent port 1's part of the
rss table. Fix by setting the size of each entry appropriately.
Fixes: 92c8a63223 ("cxgbe: add device configuration and Rx support")
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Change the fields of outer_mac and inner_mac in struct
rte_eth_tunnel_filter_conf from pointer to struct in order to
keep the code's readability.
Signed-off-by: Xutao Sun <xutao.sun@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The patch adds VxLAN & NVGRE TX checksum offload. When the flag of
outer IP header checksum offload is set, we'll set the context
descriptor to enable this checksum offload.
Also update the release notes for VxLAN & NVGRE checksum offload support.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The names of the functions for tunnel port configuration are not
accurate. They're tunnel_add/del; it is better to change them to
tunnel_port_add/del.
The old functions are directly replaced because the API and ABI
compatibility of ethdev are already broken in 16.04.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add the CLIs to support the E-tag operation.
1, Offloading of E-tag insertion and stripping.
2, Forwarding the E-tag packets to pools based on the GRP and E-CID_base.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
Add CLIs to config ether type of l2 tunnel, and to enable/disable
a type of l2 tunnel.
Now only e-tag tunnel is supported.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
Add support of l2 tunnel configuration and operations.
1, Support modifying ether type of a type of l2 tunnel.
2, Support enabling and disabling the support of a type of l2 tunnel.
3, Support enabling/disabling l2 tunnel tag insertion/stripping.
4, Support enabling/disabling l2 tunnel packets forwarding.
5, Support adding/deleting forwarding rules for l2 tunnel packets.
Only support E-tag now.
Also update the release note.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
In order to set ether type of VLAN for single VLAN, inner
and outer VLAN, the VLAN type as an input parameter is added
to 'rte_eth_dev_set_vlan_ether_type()'.
In addition, corresponding changes in e1000, ixgbe and i40e
are also added.
It is an ABI break but ethdev library is already bumped for 16.04.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Sample app implementing an IPsec Security Gateway.
The main goal of this app is to show the use of the cryptodev framework
in a "real world" application.
Currently only static IPv4 ESP IPsec tunnels are supported, for the
following algorithms:
- Cipher: AES-CBC, NULL
- Authentication: HMAC-SHA1, NULL
Not supported:
- SA auto negotiation (No IKE implementation)
- chained mbufs
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch provides the implementation of a NULL crypto PMD, which supports
NULL cipher and NULL authentication operations, which can be chained together
as follows:
- Authentication Only
- Cipher Only
- Authentication then Cipher
- Cipher then Authentication
As this is a NULL operation device, the crypto operations which are submitted
for processing are not actually modified and are stored in a queue pair's
processed-packets ring, ready for collection when
rte_cryptodev_dequeue_burst() is called.
The patch also contains the related unit test functions to test the PMD's
supported operations.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
AES GCM on the cryptodev API was giving invalid results
in some cases, due to an incorrect IV setting.
Added AES GCM in the QAT supported algorithms,
as encryption/decryption is fully functional.
Fixes: 1703e94ac5 ("qat: add driver for QuickAssist devices")
Signed-off-by: John Griffin <john.griffin@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch provides the implementation of an AES-NI accelerated crypto PMD
which is dependent on Intel's multi-buffer library, see the white paper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors"
This PMD supports AES_GCM authenticated encryption and authenticated
decryption using 128-bit AES keys.
The patch also contains the related unit test functions.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
Wireless algorithms like SNOW 3G need input in bits.
In this patch, changes have been made to incorporate this requirement
in both the QAT and SW PMDs.
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added new SW PMD which makes use of the libsso SW library,
which provides wireless algorithms SNOW 3G UEA2 and UIA2
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_SNOW3G_UEA2
- RTE_CRYPTO_SYM_AUTH_SNOW3G_UIA2
The SNOW 3G hash and cipher algorithms, which are enabled
by this crypto PMD are implemented by Intel's libsso software
library. For library download and build instructions,
see the documentation included (doc/guides/cryptodevs/snow3g.rst)
The patch also contains the related unit tests function to test the PMD
supported operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
As cryptodev library does not depend on mbuf_offload library
any longer, this patch removes it.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Fill in the supported features matrix for CXGBE PMD.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Currently, there is no mechanism that allows the pipeline ports (in/out)
and table action handlers to override the default forwarding decision
(as previously configured per input port or in the table entry). The port
(in/out) and table action handler prototypes have been changed to allow
pipeline action handlers (port in/out, table) to remove the selected
packets from the further pipeline processing and to take full ownership
for these packets. This feature will be helpful to implement functions
such as exception handling (e.g. TTL =0), load balancing etc.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
A new rte_lpm_config structure is used so the LPM library will allocate
exactly the amount of memory which is necessary to hold the application's
rules.
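A usage sketch, assuming the new structure carries the fields shown here
(max_rules, number_tbl8s, flags):
static struct rte_lpm *
create_example_lpm(void)
{
	struct rte_lpm_config config = {
		.max_rules = 1024,	/* sized for the application's rules */
		.number_tbl8s = 256,
		.flags = 0,
	};

	return rte_lpm_create("example_lpm", rte_socket_id(), &config);
}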
Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
This patch extends the next_hop field from 8 bits to 24 bits in the LPM
library for IPv4.
Versioning symbols were added to the functions, and the
library and applications that depend on the LPM library were updated.
Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
Add introductions on how to enable Vector FM10K Rx/Tx functions,
the preconditions and assumptions on Rx/Tx configuration parameters.
The new content also lists the limitations of the vector path, so that
applications/customers can better select the most suitable Rx/Tx functions.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch documents that the statistics of fm10k based NICs must be
read regularly in order to avoid an undetected 32 bit integer-overflow.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch adds a note to the ixgbe PMD guide, stating
how often statistics must be polled from
the hardware in order to avoid register values becoming
saturated and "sticking" at the max value.
Reported-by: Jerry Zhang <jerry.zhang@intel.com>
Tested-by: Marcin Kerlin <marcinx.kerlin@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Marcin Kerlin <marcinx.kerlin@intel.com>
Move the structure ``rte_eth_fdir_masks`` change announcement from ABI
to API in release notes.
Fixes: 1409f127d7 (ethdev: fix byte order consistency of flow director)
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
There was an ABI change in the release 16.04.
Fixes: fb76dd26a3 ("cmdline: increase command line buffer")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
There was an ABI change and more are coming in the release 16.04.
Fixes: a9963a86b2 ("ethdev: increase RETA entry size")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This patch adds a new function to the EAL API:
int rte_eal_primary_proc_alive(const char *path);
The function indicates if a primary process is alive right now.
This functionality is implemented by testing for a write-lock
on the config file.
The use case for this functionality is that a secondary
process can wait until a primary process starts by polling
the function. While the primary is running, the
secondary keeps polling, so that it can detect if the primary
process quits unexpectedly.
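A usage sketch (it assumes a NULL path selects the default runtime config
file; treat that as an assumption):
#include <unistd.h>

static void
watch_primary(void)
{
	/* wait until a primary process starts */
	while (!rte_eal_primary_proc_alive(NULL))
		sleep(1);
	/* keep polling to notice an unexpected exit of the primary */
	while (rte_eal_primary_proc_alive(NULL))
		sleep(1);
}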
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Maryam Tahhan <maryam.tahhan@intel.com>
It deprecates the sysfs files 'extended_tag' and
'max_read_request_size', which were not documented.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Remove the PCI configuration of 'extended tag' and 'max read request
size', as they are not required by all devices; this lets PMDs
configure them if necessary.
In addition, 'pci_config_space_set()' is deprecated.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The PCIe 'Extended Tag' feature is important for 40G performance.
It is now enabled during each port initialization, to ensure
high performance.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
This patch fixes some mismatches between the keepalive code
and the docs. Struct names and descriptions are not in line
with the codebase.
Fixes: e64833f227 ("examples/l2fwd-keepalive: add sample application")
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Add known issue about DPDK not compiling on some CPUs
with clang versions older than 3.7.0.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Announce that Malicious Driver Detection is not supported.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Many references in the sample applications user guide are wrong because
they are hard-coded and section numbers have changed over time.
This patch changes those references to dynamic ones; this way, if
section numbers change, the references get updated automatically.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
When compiling for i686 targets, compilation could fail
if the 32-bit libc6-dev package is not installed. The
gcc-multilib package is a meta-package that will pull
in the necessary dependencies, making setup easier for
beginners.
Reported-by: Weichun Chen <weichunx.chen@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Fixed a byte order issue in the ethdev library: the structures
for setting the flow director's mask and flow entry were inconsistent;
the mask inputs are now in big endian.
Fixes: 2d4c1a9ea2 ("ethdev: add new flow director masks")
Fixes: 76c6f89e80 ("ixgbe: support new flow director masks")
Reported-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Several NICs can handle 512 entries/queues in their RETA table;
an 8-bit field is not large enough for them.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Allow long command lines in testpmd (like flow director with IPv6, ...).
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Removed _VIRTIO_PMD=n from the arch config and let the arch use _VIRTIO_PMD
from config/common_linuxapp.
Signed-off-by: Santosh Shukla <sshukla@mvista.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add the RTE_DEVEL_BUILD make variable which can be used to do things
differently when doing development vs building a release. It is
autodetected from the presence of .git in the source root and overridable
via the command line. It is used to enable the -Werror compiler flag and
may be extended to other checks.
Failing the build on warnings is a useful developer tool but it is bad
for release tarballs, which can and do get built with newer
compilers than what was used/available during development. Compilers
routinely add new warnings, so code which built silently with cc X
might no longer do so with X+1. This doesn't make the existing code
any buggier, and failing the build in this case does not help
to improve the quality of an already released version either.
This changes the default flags, which can be tuned with EXTRA_CFLAGS.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The physically linked-together combined library has been an increasing
source of problems, as was predicted when library and symbol versioning
was introduced. Replace the complex and fragile construction with a
simple linker script which achieves the same without all the problems,
remove the related kludges from eg mlx drivers.
Since creating the linker script is practically zero cost, remove the
config option and just create it always.
Based on a patch by Sergio Gonzales Monroy, linker script approach
initially suggested by Neil Horman.
Suggested-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Suggested-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Fix crc32c hash functions to return a valid crc32c value for
data lengths not multiple of 4 bytes.
ARM code is not tested.
Fixes: af75078fec ("first public release")
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
rte_pktmbuf_alloc_bulk allocates a bulk of packet mbufs.
There is a related thread about this bulk API:
http://dpdk.org/dev/patchwork/patch/4718/
Thanks to Konstantin's loop unrolling.
Attached is the wiki page about Duff's device. It explains the performance
optimization through loop unwinding, and also the most dramatic use of
case label fall-through:
https://en.wikipedia.org/wiki/Duff%27s_device
In this implementation, a while() loop is used because we cannot assume
count is strictly positive. Using a while() loop saves one line of checking.
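A minimal usage sketch of the new API (it assumes the call returns 0 on
success and leaves nothing allocated on failure):
static int
grab_burst(struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts)
{
	/* one call instead of 32 individual rte_pktmbuf_alloc() calls */
	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, 32) != 0)
		return -1;	/* not enough mbufs available */
	return 0;
}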
Signed-off-by: Gerald Rogers <gerald.rogers@intel.com>
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
How to reproduce:
1. Start vhost-switch
./examples/vhost/build/vhost-switch -c 0x3 -n 4 -- -p 1 --stat 0
2. Start VM with a virtio port
$ $QEMU -smp cores=2,sockets=1 -m 4G -cpu host -enable-kvm \
-chardev socket,id=char1,path=<path to vhost-user socket> \
-device virtio-net-pci,netdev=vhostuser1 \
-netdev vhost-user,id=vhostuser1,chardev=char1 \
-object memory-backend-file,id=mem,size=4G,mem-path=<hugetlbfs path>,share=on \
-numa node,memdev=mem -mem-prealloc \
-hda <path to VM img>
3. Start l2fwd in VM
$ ./examples/l2fwd/build/l2fwd -c 0x1 -n 4 -m 1024 -- -p 0x1
4. Use IXIA to inject packets at a small data bit rate.
Error:
vhost-switch keeps printing the error message:
failed to allocate memory for mbuf.
Root cause:
The number of mbufs allocated for a port is calculated by the formula below:
NUM_MBUFS_PER_PORT = ((MAX_QUEUES * RTE_TEST_RX_DESC_DEFAULT) + \
                      (num_switching_cores * MAX_PKT_BURST) + \
                      (num_switching_cores * RTE_TEST_TX_DESC_DEFAULT) + \
                      (num_switching_cores * MBUF_CACHE_SIZE))
Suppose num_switching_cores is 1 and MBUF_CACHE_SIZE is 128.
When initializing the port, the master core fills its mbuf mempool cache,
so some mbufs are left in that cache, for example 121.
So the total number of mbufs which can be used is:
(MAX_PKT_BURST + MBUF_CACHE_SIZE - 121) = (32 + 128 - 121) = 39.
What makes it worse is that there is a buffer to store mbufs
(which will be tx_burst to the physical port); if it occupies some mbufs,
possibly fewer than 32 mbufs are left, so vhost dequeue prints out
this message.
In short, the calculation fails to account for the master core's mbuf
mempool cache.
Reported-by: Qian Xu <qian.q.xu@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Fixing the version of the kernel required in the QAT documentation.
Signed-off-by: John Griffin <john.griffin@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
cryptodev_aesni_mb_init was returning the device id of
the device just created, but rte_eal_vdev_init
(the function that calls the first one) was expecting 0 or a
negative value.
This made it impossible to create more than one aesni_mb device
from the command line.
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
To claim that we support vhost-user live migration:
the SET_LOG_BASE request will be sent only when this feature flag
is set.
Besides this flag, we actually need another feature flag set
to make vhost-user live migration work: VHOST_F_LOG_ALL.
That flag, however, has been enabled for a long time already.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Pavel Fedin <p.fedin@samsung.com>
Add guest offload setting in vhost lib.
Virtio 1.0 spec (5.1.6.4 Processing of Incoming Packets) says:
1. If the VIRTIO_NET_F_GUEST_CSUM feature was negotiated, the
VIRTIO_NET_HDR_F_NEEDS_CSUM bit in flags can be set: if so,
the packet checksum at offset csum_offset from csum_start
and any preceding checksums have been validated. The checksum
on the packet is incomplete and csum_start and csum_offset
indicate how to calculate it (see Packet Transmission point 1).
2. If the VIRTIO_NET_F_GUEST_TSO4, TSO6 or UFO options were
negotiated, then gso_type MAY be something other than
VIRTIO_NET_HDR_GSO_NONE, and gso_size field indicates the
desired MSS (see Packet Transmission point 2).
In order to support these features, the following changes are added,
1. Extend 'VHOST_SUPPORTED_FEATURES' macro to add the offload features negotiation.
2. Enqueue these offloads: convert some fields in mbuf to the fields in virtio_net_hdr.
Some more explanations about the implementation:
For the VM2VM case, there is no need to do checksum, as we think the
data should be reliable enough, and setting VIRTIO_NET_HDR_F_NEEDS_CSUM
at the RX side will let the TCP layer bypass the checksum validation,
so that the RX side can receive the packet in the end.
In terms of us-vhost, at the vhost RX side, the offload information is
inherited from the mbuf, which is in turn inherited from the TX side. If we
can still get that info at the RX side, it means the packet is from
another VM on the same host. So, it's safe to set
VIRTIO_NET_HDR_F_NEEDS_CSUM, to skip checksum validation.
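A simplified sketch of the enqueue-side mapping described in point 2 above
(not the library's exact code; the virtio_net_hdr fields and flag names are
the ones quoted from the spec in this message):
static void
mbuf_to_virtio_hdr(const struct rte_mbuf *m, struct virtio_net_hdr *hdr)
{
	memset(hdr, 0, sizeof(*hdr));
	if (m->ol_flags & PKT_TX_L4_MASK) {
		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
		hdr->csum_start = m->l2_len + m->l3_len;
		hdr->csum_offset = (m->ol_flags & PKT_TX_L4_MASK) ==
			PKT_TX_TCP_CKSUM ?
			offsetof(struct tcp_hdr, cksum) :
			offsetof(struct udp_hdr, dgram_cksum);
	}
	if (m->ol_flags & PKT_TX_TCP_SEG) {
		hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
		hdr->gso_size = m->tso_segsz;
		hdr->hdr_len = m->l2_len + m->l3_len + m->l4_len;
	}
}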
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add vhost TX offload (CSUM and TSO) support capabilities in the vhost lib.
In order to support these features, the following changes are added:
1. Extend the 'VHOST_SUPPORTED_FEATURES' macro to add the offload features
negotiation.
2. Dequeue TX offload: convert the fields in virtio_net_hdr to the
related fields in the mbuf.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Updated release documentation to reflect new numbering scheme.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
It was requested by Intel, more than one year ago, to replace the name
"Intel DPDK" by "DPDK".
Some references to the old name were still in some docs and code comments,
leading to confusion.
Fixes: ac8ada004c ("doc: remove Intel references from release notes")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Fix the error reported by checkpatch:
"ERROR: return is not a function, parentheses are not required"
Remove parentheses in return statements like:
"return (logical expressions)"
Remove parentheses when returning a function call like:
"return (rte_mempool_lookup(...))"
Fixes: 6307b909b8 ("lib: remove extra parenthesis after return")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
In order to better compare the drivers and check what is missing
for a common baseline, we need to fill a matrix.
A CSS trick is used to fit the HTML page.
The PDF output needs some LaTeX wizardry.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Add a further ACL example where the elements of the search key
are not entirely fitting into the 4 consecutive bytes of all
input fields.
Signed-off-by: Antonio Fischetti <antonio.fischetti@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
* Remove an outdated chapter reference to Multi-process Support.
* HTML output converts "--" to "-"; this is wrong when explaining the
command arguments, so fixed-width quotes are used for them.
Fixes: fc1f2750a3 ("doc: programmers guide")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>