The debug option only provided statistics to the user, most of
which could be tracked by the application itself. Remove it as a
compile-time option and feature, simplifying the code.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Users compiling DPDK should not need to know or care about the arrangement
of cachelines in the rte_ring structure. Therefore just remove the build
option and set the structures to be always split. On platforms with 64B
cachelines, use 128B rather than 64B alignment for improved performance,
since it stops the producer and consumer data from being on adjacent
cachelines.
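For illustration, a minimal sketch of the idea (this is not the actual
rte_ring definition, just the alignment principle):

  #include <stdint.h>

  /*
   * Producer and consumer state placed on separate 128-byte aligned blocks:
   * they never share a cacheline, nor sit on adjacent ones, which matters
   * on CPUs that prefetch cachelines in pairs.
   */
  struct headtail {
          volatile uint32_t head;
          volatile uint32_t tail;
  };

  struct ring_sketch {
          struct headtail prod __attribute__((aligned(128))); /* producers */
          struct headtail cons __attribute__((aligned(128))); /* consumers */
  };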
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Downstreams might want to provide different DPDK releases at the same
time to support multiple consumers of DPDK linked against older and newer
sonames.
Also, due to the interdependencies that DPDK libraries can have, applications
might end up with an executable address space in which multiple versions of a
library are mapped by ld.so.
Think of LibA, which got an ABI bump, and LibB, which did not get an ABI bump
but depends on LibA.
Application
\-> LibA.old
\-> LibB.new -> LibA.new
That is a conflict which can be avoided by setting CONFIG_RTE_MAJOR_ABI.
If set, CONFIG_RTE_MAJOR_ABI overrides any LIBABIVER value.
An example might be ``CONFIG_RTE_MAJOR_ABI=16.11`` which will make all
libraries librte<?>.so.16.11 instead of librte<?>.so.<LIBABIVER>.
We now need to cut arbitrarily long strings after the .so, and this would
work for any ABI version in LIBABIVER:
$(Q)ln -s -f $< $(patsubst %.$(LIBABIVER),%,$@)
But using the following instead additionally allows simplifying the
Makefile for the CONFIG_RTE_NEXT_ABI case.
$(Q)ln -s -f $< $(shell echo $@ | sed 's/\.so.*/.so/')
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Reviewed-by: Jan Blunck <jblunck@infradead.org>
Tested-by: Jan Blunck <jblunck@infradead.org>
Re-enable CONFIG_RTE_LIBRTE_SCHED, since it is needed to build
correctly.
Fix a few warnings when compiling mpipe_tilegx.c.
Remove an empty rte_cpu_feature_table[] array using a bogus type.
Properly set RTE_OBJCOPY_{TARGET,ARCH} in mk/arch/tile/rte.vars.mk.
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Remove the RTE_LIBRTE_SFC_EFX_TSO config option since it is not
required any more:
- the unreasonable limit on the number of Tx queues when TSO is not
actually required should be solved using a per-device parameter
- the performance difference with and without TSO compiled in is small
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patchset introduces a new application which allows measuring
performance parameters of the PMDs available in the crypto tree. The goal of
this application is to replace the existing performance tests in app/test.
The available parameters are: throughput (--ptest throughput) and latency
(--ptest latency). Users can run tests on multiple cores, but only
one type of crypto PMD can be measured during a single application
execution. Cipher parameters, type of device, type of operation and
chain mode have to be specified on the command line as application
parameters. These parameters are checked against the device capabilities
structure.
A couple of new library functions in librte_cryptodev are introduced for
application use.
To build the application, the CONFIG_RTE_APP_CRYPTO_PERF flag has to be set
(it is set by default).
Example of usage: -c 0xc0 --vdev crypto_aesni_mb_pmd -w 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
--cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
sha1-hmac --auth-op generate --auth-key-sz 64 --auth-digest-sz 12
--total-ops 10000000 --burst-sz 32 --buffer-sz 64
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Signed-off-by: Marcin Kerlin <marcinx.kerlin@intel.com>
Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
Adds a Makefile for the scheduler cryptodev PMD, and updates existing
Makefiles. Unlike other cryptodev PMDs, the scheduler PMD
must be built as a shared library.
Adds scheduler PMD enable and debug flags to config/common_base.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
KNI ethtool support (the KNI control path) is not commonly used,
and it tends to break the build with new versions of the Linux kernel.
The KNI ethtool feature is disabled by default. The KNI datapath is not
affected by this update.
It is possible to enable the feature explicitly with the config option:
"CONFIG_RTE_KNI_KMOD_ETHTOOL=y"
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch introduces a crypto poll mode driver
using the ARMv8 cryptographic extensions.
CPU compatibility with this driver is detected at
run time, and the virtual crypto device will not be
created if the CPU doesn't provide
AES, SHA1, SHA2 and NEON.
This PMD is optimized to provide a performance boost
for chained crypto operation processing,
such as encryption + HMAC generation or
decryption + HMAC validation. In particular,
cipher-only or hash-only operations are
not provided.
The driver currently supports AES-128-CBC
in combination with SHA256 HMAC or SHA1 HMAC,
and relies on the external armv8_crypto library:
https://github.com/caviumnetworks/armv8_crypto
The ARMv8 crypto PMD is built if compiling for ARM64
and the CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO option
is enabled in the configuration file.
The ARMV8_CRYPTO_LIB_PATH environment variable must
point to the appropriate library directory.
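For illustration, this is roughly how an application requests such a chain
through the cryptodev symmetric transform API; the key buffers below are
placeholders and the exact field layout should be checked against
rte_crypto_sym.h of the release:

  #include <rte_crypto_sym.h>

  static uint8_t cipher_key[16];      /* AES-128 key (placeholder) */
  static uint8_t auth_key[64];        /* HMAC key (placeholder)    */

  /* Second transform in the chain: SHA1-HMAC generation. */
  static struct rte_crypto_sym_xform auth_xform = {
          .type = RTE_CRYPTO_SYM_XFORM_AUTH,
          .auth = {
                  .op = RTE_CRYPTO_AUTH_OP_GENERATE,
                  .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
                  .key = { .data = auth_key, .length = sizeof(auth_key) },
                  .digest_length = 20,
          },
  };

  /* First transform: AES-128-CBC encryption, chained to the auth xform. */
  static struct rte_crypto_sym_xform cipher_xform = {
          .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
          .next = &auth_xform,
          .cipher = {
                  .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
                  .algo = RTE_CRYPTO_CIPHER_AES_CBC,
                  .key = { .data = cipher_key, .length = sizeof(cipher_key) },
          },
  };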
Signed-off-by: Zbigniew Bodek <zbigniew.bodek@caviumnetworks.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Elastic Flow Distributor (EFD) is a distributor library that uses
perfect hashing to determine a target/value for a given incoming flow key.
It has the following advantages:
- First, because it uses perfect hashing, it does not store
the key itself and hence lookup performance is not dependent
on the key size.
- Second, the target/value can be any arbitrary value, hence
the system designer and/or operator can better optimize service rates
and the locality of inter-cluster network traffic.
- Third, since the storage requirement is much smaller than a hash-based
flow table (i.e. better fit for CPU cache), EFD can scale to
millions of flow keys.
Finally, with the current optimized library implementation, performance
is fully scalable with the number of CPU cores.
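A rough usage sketch of the library (the rule count and the 32-bit key are
arbitrary examples; treat the call signatures as an approximation of the
librte_efd API rather than a reference):

  #include <rte_efd.h>
  #include <rte_lcore.h>

  static void
  efd_example(void)
  {
          unsigned int socket = rte_socket_id();
          struct rte_efd_table *t = rte_efd_create("flow_targets",
                          1 << 20,            /* max number of rules    */
                          sizeof(uint32_t),   /* key length in bytes    */
                          1ULL << socket,     /* online CPU socket mask */
                          socket);            /* offline (mgmt) socket  */

          uint32_t key = 0x0a000001;          /* example flow key       */

          rte_efd_update(t, socket, &key, 3);                /* key -> target 3 */
          efd_value_t target = rte_efd_lookup(t, socket, &key);
          (void)target;
  }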
Signed-off-by: Byron Marohn <byron.marohn@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Signed-off-by: Saikrishna Edupuganti <saikrishna.edupuganti@intel.com>
Acked-by: Christian Maciocco <christian.maciocco@intel.com>
Because using the NFP PMD requires a specific BSP to be installed, PMD
support was not enabled by default before. This was just to make
people aware of that dependency, since no such BSP is needed
just to compile DPDK with NFP PMD support.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Add the PCI device ID for ConnectX-5 and enable multi-packet send for PF and
VF, along with documentation and release note updates.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Reviewed-by: Mark Spender <mspender@solarflare.com>
Reviewed-by: Robert Stonehouse <rstonehouse@solarflare.com>
The PMD allows DPDK and the host to communicate using a raw
device interface on the host and in the DPDK application. The created
device is a TAP device with an L2 packet header.
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Aws Ismail <aismail@ciena.com>
Tested-by: Vasily Philipov <vasilyf@mellanox.com>
Enable the PMD by default on supported configurations.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Added API for `rte_eth_tx_prepare`
uint16_t rte_eth_tx_prepare(uint8_t port_id, uint16_t queue_id,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
Added fields to the `struct rte_eth_desc_lim`:
uint16_t nb_seg_max;
/**< Max number of segments per whole packet. */
uint16_t nb_mtu_seg_max;
/**< Max number of segments per one MTU */
These fields can be used to create valid packets according to the
following rules:
* For a non-TSO packet, a single transmit packet may span up to
"nb_mtu_seg_max" buffers.
* For a TSO packet, the total number of data descriptors is "nb_seg_max",
and each segment within the TSO may span up to "nb_mtu_seg_max".
Added functions:
int
rte_validate_tx_offload(struct rte_mbuf *m)
to validate the general requirements for the Tx offload fields set in the
mbuf of a packet, such as flag completeness. In the current implementation
this function is called optionally when RTE_LIBRTE_ETHDEV_DEBUG is enabled.
int rte_net_intel_cksum_prepare(struct rte_mbuf *m)
to prepare the pseudo-header checksum for TSO and non-TSO TCP/UDP packets
before hardware Tx checksum offload:
- for non-TSO TCP/UDP packets the full pseudo-header checksum is
computed and set;
- for TSO the IP payload length is not included.
int
rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
this function uses the same logic as rte_net_intel_cksum_prepare, but
allows the application to choose which offloads should be taken into
account if full preparation is not required.
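As a usage sketch (error handling kept to the minimum an application might
do), the prepare call sits right before the burst, and on failure the
returned count plus rte_errno identify the offending packet:

  #include <stdio.h>
  #include <string.h>
  #include <rte_ethdev.h>
  #include <rte_errno.h>

  static uint16_t
  send_burst(uint8_t port_id, uint16_t queue_id,
             struct rte_mbuf **pkts, uint16_t nb_pkts)
  {
          /* Validate/fix up offload metadata before handing packets to HW. */
          uint16_t nb_prep = rte_eth_tx_prepare(port_id, queue_id,
                          pkts, nb_pkts);

          if (nb_prep < nb_pkts)
                  /* pkts[nb_prep] failed preparation; rte_errno has the cause. */
                  printf("tx_prepare stopped at %u: %s\n",
                                  nb_prep, strerror(rte_errno));

          return rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
  }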
PERFORMANCE TESTS
-----------------
This feature was tested with a modified csum engine from test-pmd.
The packet checksum preparation was moved from the application to the Tx
preparation step placed before the burst.
We may expect some overhead costs caused by:
1) using additional callback before burst,
2) rescanning burst,
3) additional condition checking (packet validation),
4) worse optimization (e.g. packet data access, etc.)
We tested it using the ixgbe Tx preparation implementation with some parts
disabled to get comparable information about the impact of the different
parts of the implementation.
IMPACT:
1) For an unimplemented Tx preparation callback the performance impact is
negligible.
2) For the packet condition check without checksum modifications (nb_segs,
available offloads, etc.) the result is 14626628/14252168 (~2.62% drop).
3) Full support in the ixgbe driver (point 2 + packet checksum
initialization) gives 14060924/13588094 (~3.48% drop).
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
There was an option, CONFIG_RTE_INSECURE_FUNCTION_WARNING (disabled by
default), which prevented the use of some libc functions:
sprintf, snprintf, vsnprintf, strcpy, strncpy, strcat, strncat, sscanf,
strtok, strsep and strlen.
It is all about using them in the right place with the right precautions.
However, it is neither really possible nor good advice to disable them.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Today, all logs whose level is lower than INFO are dropped at
compile time. This prevents enabling debug logs at run time using
--log-level=8.
The rationale was to remove debug logs from the data path at
compile time, avoiding a test at run time.
This patch changes the behavior of RTE_LOG() to avoid the compile-time
optimization, and introduces the RTE_LOG_DP() macro that has the same
behavior as the previous RTE_LOG(), for the rare cases where debug
logs are in the data path.
So it is now possible to enable debug logs at run time by just
specifying --log-level=8. Some drivers still have special compile-time
options to enable more debug logs. Maintainers may consider
removing or reducing them.
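A minimal sketch of the resulting split (the USER1 logtype is chosen only
for illustration):

  #include <rte_log.h>

  static void
  log_examples(unsigned int port_id, unsigned int nb_rx)
  {
          /* Control path: always compiled in now, filtered at run time
           * by --log-level (8 corresponds to RTE_LOG_DEBUG). */
          RTE_LOG(DEBUG, USER1, "control event on port %u\n", port_id);

          /* Data path: RTE_LOG_DP() keeps the previous behavior and is
           * still compiled out when its level is above the build-time
           * log level. */
          RTE_LOG_DP(DEBUG, USER1, "rx burst returned %u packets\n", nb_rx);
  }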
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
- Fix to use bitmapped values in NVM configuration for speed capability
advertisement. This issue is specific to 25G NIC since it is capable
of 25G and 10G speeds.
- Update feature list.
Fixes: 64c239b7f8 ("net/qede: fix advertising link speed capability")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
This patch replaces the name "libcrypto" with "openssl" in file directories,
symbol prefixes and sub-names connected with the old name.
Poll mode driver files, test files, and documentation are renamed.
This is done for a better name association with the library, because
the cryptography operations use the OpenSSL library crypto API.
Fixes: d61f70b4c9 ("crypto/libcrypto: add driver for OpenSSL library")
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The QEDE PMD now uses an unzipped firmware file, eliminating the dependency
on zlib. Hence remove the LDLIBS entry from the Makefile and enable the qede
PMD by default.
Fixes: 6adac0bf30 ("qede: add missing external dependency and disable by default")
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Use ARM NEON intrinsics to implement the i40e vPMD
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Since the switch to kernel dynamic debugging, it is possible to remove the
compile-time debug log configuration.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
This code provides the initial implementation of the libcrypto
poll mode driver. All cryptography operations use the OpenSSL
library crypto API. Each algorithm uses the EVP_ interface from the
OpenSSL API, which is recommended by the OpenSSL maintainers.
This patch adds libcrypto poll mode driver support to the librte_cryptodev
library.
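For context, the EVP_ style the PMD builds on looks roughly like this (a
generic OpenSSL sketch, not code taken from the PMD; error handling
omitted):

  #include <openssl/evp.h>

  /* One-shot AES-128-CBC encryption of "len" bytes using the EVP_ API. */
  static int
  evp_aes_cbc_encrypt(const unsigned char *key, const unsigned char *iv,
                      const unsigned char *in, int len, unsigned char *out)
  {
          EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
          int outl = 0, finl = 0;

          EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);
          EVP_EncryptUpdate(ctx, out, &outl, in, len);
          EVP_EncryptFinal_ex(ctx, out + outl, &finl);
          EVP_CIPHER_CTX_free(ctx);

          return outl + finl;
  }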
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added new SW PMD which makes use of the libsso SW library,
which provides wireless algorithms ZUC EEA3 and EIA3
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_ZUC_EEA3
- RTE_CRYPTO_SYM_AUTH_ZUC_EIA3
The ZUC hash and cipher algorithms, which are enabled
by this crypto PMD, are implemented by Intel's libsso software
library.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Also add a read memory barrier to avoid status inconsistency
between two readings of Rx descriptors.
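The kind of ordering issue a read barrier addresses, as a generic sketch
(the descriptor layout and done bit here are hypothetical, not the actual
i40e definitions):

  #include <stdint.h>
  #include <rte_atomic.h>

  struct rx_desc {                        /* hypothetical layout        */
          volatile uint64_t status;       /* written last by the NIC    */
          volatile uint16_t length;
  };
  #define DESC_DONE (1ULL << 0)           /* hypothetical "done" flag   */

  static inline uint16_t
  read_len_if_done(const struct rx_desc *d)
  {
          if (!(d->status & DESC_DONE))
                  return 0;

          /*
           * On weakly ordered CPUs the load of d->length may otherwise be
           * satisfied before the status load above, returning stale data
           * even though the descriptor is reported as done.
           */
          rte_rmb();

          return d->length;
  }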
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Remove the vhost-cuse code, including the eventfd_link kernel module that
is for vhost-cuse only.
The lib/virt/qemu-wrap.py script is also removed, as it's mainly for
vhost-cuse usage.
As we have one vhost implementation now, only one vhost config option is
needed. Thus, CONFIG_RTE_LIBRTE_VHOST_USER is removed.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch adds a port of the ACL library to ppc64le.
Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Acked-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch adds a ppc64le port of the LPM library in DPDK.
Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Acked-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Following discussions on the mailing list [1] and since nobody stood up to
implement the necessary cleanups, here is the ivshmem integration removal.
There is not much to say about this patch; a lot of code is being removed.
The default configuration file for the packet_ordering example is replaced
with the "native" x86 file.
The only tricky part is in eal_memory with the memseg index stuff.
More cleanups can be done after this but will come in subsequent patchsets.
[1]: http://dpdk.org/ml/archives/dev/2016-June/040844.html
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
With the current config structure, all configuration parameters are put into
common_base with a default value and overridden in an environment file
if required; CONFIG_RTE_VIRTIO_USER is missing in common_base.
The fix is simple: add CONFIG_RTE_VIRTIO_USER=n as the default
macro value.
Fixes: ce2eabdd43 ("net/virtio-user: add virtual device")
Reported-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Inline TX will be fully managed by the PMD after Verbs is bypassed in the
data path. Remove the current code until then.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
There is no scatter/gather support anymore, CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N
has no purpose and can be removed.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
- Add the device id to the PCI table
- Add polling for the slowpath events for CMT mode devices
- Add prerequisites to allow 100G mode
  * The minimum number of queues needed is 2
  * Only even numbers of queues are allowed
- Update documentation
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Introduce driver initialization and enable the build infrastructure for
the nicvf PMD.
By default, it is enabled only for the defconfig_arm64-thunderx-*
configs, as it is an inbuilt NIC device.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <kamil.rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
This patch adds the initial skeleton for the bnxt driver along with the
NIC guide, and ties the driver into the build system.
At this point, the driver simply fails init.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
[Release Note Addition]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Use ARM NEON intrinsics to implement the ixgbe vPMD
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
[style fixes as highlighted by checkpatch.pl]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
By default, the mempool ops used for mbuf allocation is a multi-producer,
multi-consumer ring. We could imagine a target (maybe some
network processors?) that provides a hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.
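A small sketch of how the default is consumed: RTE_MBUF_DEFAULT_MEMPOOL_OPS
is the string from the build config (e.g. "ring_mp_mc"), and a pool is bound
to the named ops before it is populated. The wrapper name below is just for
illustration.

  #include <rte_config.h>
  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  /* Bind an (unpopulated) mempool to the build-time default ops by name. */
  static int
  use_default_mempool_ops(struct rte_mempool *mp)
  {
          return rte_mempool_set_ops_byname(mp,
                          RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
  }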
Signed-off-by: David Hunt <david.hunt@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Added new SW PMD which makes use of the libsso_kasumi SW library,
which provides wireless algorithms KASUMI F8 and F9
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
- RTE_CRYPTO_SYM_AUTH_KASUMI_F9
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The librte_pdump library provides a framework for
packet capture in DPDK. The library provides a set of
APIs to initialize the packet capture framework, to
enable or disable packet capture, and to uninitialize
it.
The librte_pdump library works on a client/server model.
The server is responsible for enabling or disabling
packet capture, and the clients are responsible
for requesting that packet capture be enabled or
disabled.
The enable APIs take port, queue, ring and
mempool parameters. Applications should pass this information
to get the packets from the DPDK ports.
For enable requests from applications, the library creates a client
request containing the mempool, ring, port and queue information and
sends the request to the server. After receiving the request, the server
registers Rx and Tx callbacks for all the ports and queues.
After registration, the callbacks receive the
Rx and Tx packets. The packets are then copied into new mbufs that
are allocated from the user-passed mempool. These new mbufs are then
enqueued onto the application-passed ring. Applications need to dequeue
the mbufs from the rings and direct them to devices such as a
pcap vdev to view the packets outside of DPDK
using packet capture tools.
For disable requests, the library creates a client request containing
the port and queue information and sends the request to the server.
After receiving the request, the server removes the Rx and Tx callbacks
for all the ports and queues.
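As a client-side sketch (the ring and mempool are created by the application
beforehand; the port/queue numbers are examples and the exact prototypes
should be checked in rte_pdump.h):

  #include <rte_pdump.h>

  static void
  pdump_example(struct rte_ring *ring, struct rte_mempool *pool)
  {
          rte_pdump_init(NULL);           /* initialize the framework */

          /* Capture both directions of port 0, queue 0 into ring/pool. */
          rte_pdump_enable(0, 0, RTE_PDUMP_FLAG_RXTX, ring, pool, NULL);

          /* ... dequeue mbufs from "ring", e.g. feed them to a pcap vdev ... */

          rte_pdump_disable(0, 0, RTE_PDUMP_FLAG_RXTX);
          rte_pdump_uninit();
  }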
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>