The following APIs are implemented in the
librte_flow_classify library:
rte_flow_classifier_create
rte_flow_classifier_free
rte_flow_classifier_query
rte_flow_classify_table_create
rte_flow_classify_table_entry_add
rte_flow_classify_table_entry_delete
The following librte_table APIs are used:
f_create to create a table.
f_add to add a rule to the table.
f_del to delete a rule from the table.
f_free to free a table.
f_lookup to match packets with the rules.
The library supports counting of IPv4 five-tuple packets only,
i.e. IPv4 UDP, TCP and SCTP packets.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Add the mrvl net PMD driver skeleton, providing a base for further
development. Besides the basic functionality, QoS configuration is
introduced as well.
Signed-off-by: Jacek Siuda <jck@semihalf.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Generic Segmentation Offload (GSO) is a SW technique to split large
packets into small ones. Akin to TSO, GSO enables applications to
operate on large packets, thus reducing per-packet processing overhead.
To give applications more flexibility, DPDK GSO is implemented
as a standalone library. Applications explicitly use the GSO library
to segment packets. Segmenting a packet requires two steps. The first
is to set the proper flags in mbuf->ol_flags, where the flags are the
same as those of TSO. The second is to call the segmentation API,
rte_gso_segment(). This patch introduces the GSO API framework to DPDK.
rte_gso_segment() splits an input packet into small ones in each
invocation. The GSO library refers to these small packets generated
by rte_gso_segment() as GSO segments. Each of the newly-created GSO
segments is organized as a two-segment MBUF, where the first segment is a
standard MBUF, which stores a copy of packet header, and the second is an
indirect MBUF which points to a section of data in the input packet.
rte_gso_segment() reduces the refcnt of the input packet by 1. Therefore,
when all GSO segments are freed, the input packet is freed automatically.
Additionally, since each GSO segment has multiple MBUFs (i.e. 2 MBUFs),
the driver of the interface to which the GSO segments are sent must
support transmitting multi-segment packets.
The GSO framework clears the PKT_TX_TCP_SEG flag for both the input
packet, and all produced GSO segments in the event of success, since
segmentation in hardware is no longer required at that point.
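For illustration, a minimal caller-side sketch in C (hedged: it assumes a
gso_ctx already initialized with direct/indirect mempools and a gso_size,
that the mbuf's l2/l3/l4 lengths are already filled in, and that the
context-passing convention matches the librte_gso header; MAX_GSO_SEGS is
an arbitrary example value):

    #include <rte_gso.h>
    #include <rte_mbuf.h>

    #define MAX_GSO_SEGS 64 /* illustrative output burst size */

    /* Split one large TCP packet into GSO segments. */
    static int
    gso_one_packet(struct rte_mbuf *pkt, struct rte_gso_ctx *gso_ctx)
    {
        struct rte_mbuf *segs[MAX_GSO_SEGS];
        int nb_segs;

        /* Step 1: mark the packet for segmentation, as with TSO. */
        pkt->ol_flags |= PKT_TX_TCP_SEG;

        /* Step 2: split it; each output mbuf is a 2-segment chain
         * (copied header + indirect mbuf pointing into pkt). */
        nb_segs = rte_gso_segment(pkt, gso_ctx, segs, MAX_GSO_SEGS);
        if (nb_segs < 0)
            return nb_segs; /* allocation or parameter error */

        /* segs[] can now go to a TX burst on a multi-segment capable port. */
        return nb_segs;
    }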
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Signed-off-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Currently, enabling assertions requires setting CONFIG_RTE_LOG_LEVEL to
RTE_LOG_DEBUG. CONFIG_RTE_LOG_LEVEL is the default log level of the
control path, while RTE_LOG_DP_LEVEL is the log level of the data path.
It is a little hard to understand that assertions are controlled by the
control path log level, especially for assertions used on the data path.
On the other hand, DPDK needs a switch to enable assertions without
impacting the log output level, assuming "--log-level" is not specified.
Assertions are an important API to balance DPDK's high performance and
robustness. To promote assertion usage, it is valuable to decouple
assertions from CONFIG_RTE_LOG_LEVEL.
In one word, log is log, assertion is assertion, debug is hot pot :)
The rationale of this patch is to introduce a dedicated switch for
assertions: RTE_ENABLE_ASSERT
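As a small illustration (a hedged sketch; RTE_ASSERT() comes from
rte_debug.h, and checked_div() is a hypothetical helper):

    #include <stdint.h>
    #include <rte_debug.h>

    /* RTE_ASSERT() compiles to a real check only when RTE_ENABLE_ASSERT
     * is set; otherwise it expands to nothing, so neither the data path
     * nor the log output level is affected. */
    static inline uint16_t
    checked_div(uint16_t num, uint16_t den)
    {
        RTE_ASSERT(den != 0);
        return num / den;
    }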
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
We remove Xen-specific code in EAL, including the --xen-dom0 option,
memory initialization code, compile-time dependencies, etc.
Related documents are removed or updated, and the EAL library
version is bumped.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
The Membership library is an extension and generalization of traditional
filter structures (for example, Bloom filters and cuckoo filters).
In general, the Membership library is a data structure that provides a
"set-summary" and responds to set-membership queries of whether a
certain element belongs to a set(s). A membership test for an element
will return the set this element belongs to, or not-found if the
element was never inserted into the set-summary.
The results of the membership test are not 100% accurate: a certain
false positive or false negative probability can exist. However,
compared to a "full-blown" complete list of elements, a "set-summary"
is memory efficient and fast on lookup.
This patch adds the main API definition.
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Reviewed-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The kernel patch to support PCI resource mapping was merged:
https://patchwork.kernel.org/patch/9677441/
So enable igb_uio in the default arm64 configuration.
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch also adds, to the DPAA-specific config file, the configuration
necessary to compile the DPAA mempool driver.
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS=dpaa is also configured to allow
applications to use the DPAA mempool as the default.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
This option both sets the maximum number of segments for Rx/Tx packets
and determines whether scattered mode is supported at all. This commit
removes the latter, as well as the configuration file exposure, since the
most appropriate value should be decided at run-time.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The patch simplifies DPDK application analysis for developers who use
Intel® VTune Amplifier.
Empty cycles are iterations that yielded no RX packets. Since DPDK runs
in poll mode, wasting cycles is equal to wasting CPU time. Tracing such
iterations can identify that a device is underutilized. Tracing empty
cycles becomes even more critical if a system uses a lot of Ethernet
ports.
The patch makes it possible to analyze empty cycles without changing
application code. All that needs to be done is to reconfigure and rebuild
DPDK itself with CONFIG_RTE_ETHDEV_PROFILE_ITT_WASTED_RX_ITERATIONS
enabled. The important thing here is that this does not affect DPDK code:
the profiling code is not compiled if the user does not set the config
flag.
The patch provides a common way to inject RX queue profiling and a VTune
specific implementation.
Signed-off-by: Ilia Kurakin <ilia.kurakin@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The current mlx4 OFED version has a bug in which ibv destroy functions
return an error when the device has been plugged out, even though the
resources were destroyed correctly.
Hence, the failsafe PMD aborted, only in debug mode, when it tried to
remove the device during the plug-out process.
The workaround adds an option to replace all claim_zero
assertions with debug messages; note that this option also
affects assertions unrelated to ibv destroy.
The DPDK 18.02 release should work with Mellanox OFED-4.2, which will
include the verbs fix for this bug; this patch can then
be removed.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Earlier, the bonding PMD was disabled in the default config for ppc64le.
Remove that override now that it has been verified.
Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Introduce the fail-safe poll mode driver initialization and enable its
build infrastructure.
This PMD allows applications to benefit from true hot-plugging
support without having to implement it.
It intercepts and manages Ethernet device removal events issued by
slave PMDs and re-initializes them transparently when brought back.
It also allows defining a contingency for the removal of a device, by
designating a fail-over device that will take over transmit operations
if the preferred device is removed.
Applications only see a fail-safe instance, without having to care about
the underlying activity that ensures their continued operation.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Acked-by: Olga Shern <olgas@mellanox.com>
The NXP copyright has been wrongly worded with '(c)' in various places.
This patch removes these extra characters. It also removes
"All rights reserved".
Only the NXP copyright syntax is changed; the Freescale copyright is not
modified.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Generic Receive Offload (GRO) is a widely used SW-based offloading
technique to reduce per-packet processing overhead. It gains
performance by reassembling small packets into large ones. This
patchset is to support GRO in DPDK. To support GRO, this patch
implements a GRO API framework.
To enable more flexibility to applications, DPDK GRO is implemented as
a user library. Applications explicitly use the GRO library to merge
small packets into large ones. DPDK GRO provides two reassembly modes.
One is called lightweight mode, the other is called heavyweight mode.
If applications want to merge packets in a simple way and the number
of packets is relatively small, they can use the lightweight mode.
If applications need more fine-grained controls, they can choose the
heavyweight mode.
rte_gro_reassemble_burst is the main reassembly API which is used in
lightweight mode and processes N packets at a time. For applications,
performing GRO in lightweight mode is simple. They just need to invoke
rte_gro_reassemble_burst. Applications can get GROed packets as soon as
rte_gro_reassemble_burst returns.
rte_gro_reassemble is the main reassembly API which is used in
heavyweight mode and tries to merge N input packets with the packets
in GRO reassembly tables. For applications, performing GRO in heavyweight
mode is relatively complicated. Before performing GRO, applications need
to create a GRO context object, which keeps reassembly tables of the
desired GRO types, by calling rte_gro_ctx_create. Then applications can
use rte_gro_reassemble to merge packets. The GROed packets are kept in
the reassembly tables of the GRO context object; to retrieve them,
applications need to flush them manually via the flush API.
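For illustration, a minimal lightweight-mode sketch (hedged: the
rte_gro_param field values are arbitrary examples and the structure
layout follows the librte_gro header):

    #include <rte_gro.h>
    #include <rte_mbuf.h>

    /* Merge a received burst in place using lightweight mode. */
    static uint16_t
    gro_rx_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        struct rte_gro_param param = {
            .gro_types = RTE_GRO_TCP_IPV4, /* only TCP/IPv4 reassembly */
            .max_flow_num = 4,             /* example sizing */
            .max_item_per_flow = 32,
        };

        /* Returns the number of packets after merging; merged packets
         * are written back into pkts[]. */
        return rte_gro_reassemble_burst(pkts, nb_pkts, &param);
    }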
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Jianfeng Tan <jianfeng.tan@intel.com>
Replace the incorrect "Cavium Networks" and "Cavium Ltd" company names
with the correct "Cavium, Inc" company name in
copyright headers.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The dpdk-test-eventdev tool is a Data Plane Development Kit (DPDK)
application that allows exercising various eventdev use cases. This
application has a generic framework to add new eventdev based test cases
to verify functionality and measure the performance parameters of DPDK
eventdev devices.
This patch adds the skeleton of the dpdk-test-eventdev application.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Moved all common defines from defconfig_arm64-armv8a-linuxapp-gcc
to common_armv8a_linuxapp.
Created new config arm64-armv8a-linuxapp-clang which adds the
clang support to armv8a.
Now defconfigs arm64-armv8a-linuxapp-gcc/clang contain only the
CONFIG_RTE_TOOLCHAIN* defines and all other common defines are
inherited from common_armv8a_linuxapp.
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
* Removed setting CONFIG_RTE_SCHED_VECTOR=n from armv8a config
so that the setting from common_base is taken as the default
setting for armv8a
* Verified the changes with sched_autotest unit test case
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
It is safe to enable LIBRTE_VHOST_NUMA by default for all
configurations where libnuma is already a default dependency.
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Currently EAL allocates hugepages one by one, not paying attention
to which NUMA node the allocation was done from.
Such behaviour leads to allocation failure if the number of available
hugepages for the application is limited by cgroups or hugetlbfs and
memory is requested not only from the first socket.
Example:
# 90 x 1GB hugepages available in a system
cgcreate -g hugetlb:/test
# Limit to 32GB of hugepages
cgset -r hugetlb.1GB.limit_in_bytes=34359738368 test
# Request 4GB from each of 2 sockets
cgexec -g hugetlb:test testpmd --socket-mem=4096,4096 ...
EAL: SIGBUS: Cannot mmap more hugepages of size 1024 MB
EAL: 32 not 90 hugepages of size 1024 MB allocated
EAL: Not enough memory available on socket 1!
Requested: 4096MB, available: 0MB
PANIC in rte_eal_init():
Cannot init memory
This happens because all allocated pages are
on socket 0.
Fix this issue by setting the MPOL_PREFERRED mempolicy for each hugepage
to one of the requested nodes using the following scheme:
1) Allocate essential hugepages:
1.1) Allocate only as many hugepages from NUMA node N as are needed
to fit the memory requested for this node.
1.2) Repeat 1.1 for all NUMA nodes.
2) Try to map all remaining free hugepages in a round-robin
fashion.
3) Sort pages and choose the most suitable.
In this case all essential memory will be allocated and all remaining
pages will be fairly distributed between all requested nodes.
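The idea can be illustrated with a small standalone sketch (not the EAL
implementation itself; it only shows the preferred-node policy via
libnuma before faulting a hugepage, assumes a kernel with MAP_HUGETLB,
and links with -lnuma):

    #include <stddef.h>
    #include <sys/mman.h>
    #include <numa.h>

    /* Ask the kernel to satisfy the next hugepage allocation from `node`
     * before mapping, so the page lands on the requested socket. */
    static void *
    map_hugepage_on_node(size_t len, int node)
    {
        void *va;

        numa_set_preferred(node);
        va = mmap(NULL, len, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        numa_set_localalloc(); /* restore the default policy */
        return va == MAP_FAILED ? NULL : va;
    }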
A new config option, RTE_EAL_NUMA_AWARE_HUGEPAGES, is introduced and
enabled by default for linuxapp, except on armv7 and dpaa2.
Enabling this option adds libnuma as a dependency for EAL.
Fixes: 77988fc08d ("mem: fix allocating all free hugepages")
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Move all bypass functions to ixgbe pmd and remove function
pointers from the eth_dev_ops struct.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
TX coalescing waits for ETH_COALESCE_PKT_NUM packets to be coalesced
across bursts before transmitting them. For slow traffic, such as
100 PPS, this approach increases latency since packets are received
one at a time and TX coalescing has to wait for ETH_COALESCE_PKT_NUM
packets to arrive before transmitting.
To fix this:
- Update the RX path to use the status page instead and only receive
packets when either the ingress interrupt timer threshold (5 us) or
the ingress interrupt packet count threshold (32 packets) fires
(i.e. whichever happens first).
- If the number of packets coalesced is <= the number of packets sent
by the TX burst function, stop coalescing and transmit these packets
immediately.
Also add a compile-time option to favor throughput over latency by
default.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
DPAA2 devices now support cortex-a72. They no longer support a57.
Also, fp and simd no longer need to be stated explicitly for the
standard a72 core.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Add stub callbacks for the generic flow API and a new FLOW debug define.
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Nelson Escobar <neescoba@cisco.com>
When building DPDK with musl, backtrace support needs to be disabled to
remove references to execinfo.h, which is not currently supported by
musl.
This also applies to some other libc implementations which
don't support backtrace() and backtrace_symbols().
musl is an implementation of the userspace portion
of the standard library functionality described in
the ISO C and POSIX standards, plus common extensions.
More details about musl can be found at http://www.musl-libc.org .
Signed-off-by: Wei Dai <wei.dai@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Making AVX and AVX512 configurable is useful for performance and power
testing.
A similar kernel patch is at https://patchwork.kernel.org/patch/9618883/.
AVX512 support, like in rte_memcpy, has been in DPDK since 16.04, but it
is still unproven in rich hardware use cases. Therefore it is marked as
experimental for now and will be enabled after enough field testing and
possible optimization.
Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
Reviewed-by: Zhiyong Yang <zhiyong.yang@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
armv8 implementations may have 64B or 128B cache lines.
Set the generic config to the maximum available cache line size to
address the minimum DMA alignment across all arm64 implementations.
Increasing the cache line size has no negative impact on cache
invalidation on systems with a smaller cache line.
The need for the minimum DMA alignment affects functional aspects
of the platform, so the default config should cater to them.
There is an impact on memory usage with this scheme, but that is not too
important for the single-image arm64 distribution use case.
The arm64 Linux kernel follows a similar approach for the single
arm64 image use case.
http://lxr.free-electrons.com/source/arch/arm64/include/asm/cache.h
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
DPAA2 Hardware Mempool handlers allow enqueue/dequeue from NXP's
QBMAN hardware block.
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS is set to 'dpaa2' if the pool
is enabled.
This mempool currently supports packet mbuf type blocks only.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Having packets received without any offload flags given in the mbuf is
not very useful, and performance tests with testpmd indicate that little
benefit is gained with the current code by turning off the flags. This
makes the build-time option pointless, so we can remove it.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Having packets received without any offload flags given in the mbuf is
not very useful, and performance tests with testpmd indicate that little
to no benefit is gained with the current code by turning off the flags.
This makes the build-time option pointless, so we can remove it.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
The AVP devices are only supported on Intel 64-bit architectures, so
adjust the defconfig attributes accordingly.
Fixes: 908072e9d0 ("net/avp: support driver registration")
Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Enable Arkville on supported configurations.
Add overview documentation.
Add minimum driver support for a valid compile.
The Arkville PMD is not supported on ARM or PowerPC at this time.
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
Signed-off-by: John Miller <john.miller@atomicrules.com>
The crypto scheduler PMD has no external dependencies, so enable it by
default.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Add a library designed to calculate latency statistics and report them
to the application when queried. The library measures minimum, average
and maximum latencies, and jitter, in nanoseconds. The current
implementation supports global latency stats, i.e. per-application stats.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Signed-off-by: Remy Horton <remy.horton@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch adds a library that calculates peak and average data-rate
statistics for Ethernet devices. These statistics are reported using
the metrics library.
Signed-off-by: Remy Horton <remy.horton@intel.com>
This patch adds a new information metrics library. This Metrics
library implements a mechanism by which producers can publish
numeric information for later querying by consumers. Metrics
themselves are statistics that are not generated by PMDs, and
hence are not reported via ethdev extended statistics.
Metric information is populated using a push model, where
producers update the values contained within the metric
library by calling an update function on the relevant metrics.
Consumers receive metric information by querying the central
metric data, which is held in shared memory.
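A hedged producer-side sketch (the metric name and values are made up;
the function names follow the librte_metrics header):

    #include <stdint.h>
    #include <rte_metrics.h>
    #include <rte_lcore.h>

    static int my_metric_key; /* key returned by registration */

    static void
    my_metrics_setup(void)
    {
        rte_metrics_init(rte_socket_id());
        /* Register a metric name once; the key is reused for updates. */
        my_metric_key = rte_metrics_reg_name("demo_events"); /* example */
    }

    static void
    my_metrics_push(int port_id, uint64_t value)
    {
        /* Producers push values; consumers read them back through the
         * query side of the library. */
        rte_metrics_update_value(port_id, my_metric_key, value);
    }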
Signed-off-by: Remy Horton <remy.horton@intel.com>
This adds the minimal changes to allow a SW eventdev implementation to
be compiled, linked and created at run time. The eventdev does nothing,
but can be created via vdev on commandline, e.g.
sudo ./x86_64-native-linuxapp-gcc/app/test --vdev=event_sw0
...
PMD: Creating eventdev sw device event_sw0, numa_node=0, sched_quanta=128
RTE>>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The skeleton driver facilitates bootstrapping a new
eventdev driver and provides a platform to verify
the northbound eventdev common code.
The driver supports both VDEV and PCI based eventdev
devices.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch implements the northbound eventdev API interface using the
southbound driver interface.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Adds the initial framework for registering the driver against the
supported PCI device identifiers.
Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
Acked-by: Vincent Jardin <vincent.jardin@6wind.com>
Adds a header file with log macros for the AVP PMD
Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
Acked-by: Vincent Jardin <vincent.jardin@6wind.com>
This commit introduces the AVP PMD file structure without adding any actual
driver functionality. Functional blocks will be added in later patches.
Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
Acked-by: Vincent Jardin <vincent.jardin@6wind.com>
Add debug options to the config file. Define macros used for logging and
make use of the config file options to enable them.
Signed-off-by: Shijith Thotton <shijith.thotton@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com>
Signed-off-by: Venkat Koppula <venkat.koppula@caviumnetworks.com>
Signed-off-by: Srisivasubramanian S <ssrinivasan@caviumnetworks.com>
Signed-off-by: Mallesham Jatharakonda <mjatharakonda@oneconvergence.com>
Enable the ThunderX nicvf PMD driver in the common
config as it does not have a build dependency
on any external library and/or architecture.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
This patch enables i40e driver in PowerPC along with its altivec
intrinsic support.
Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Acked-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Add a KNI PMD which wraps librte_kni for ease of use.
The KNI PMD can be used like any regular PMD to send/receive packets
to and from the Linux networking stack.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Yong Wang <yongwang@vmware.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Moved from lib/librte_mempool; the stack mempool handler is now an
independent driver.
Shared builds now need to link in librte_mempool_stack for the
"stack" mempool handler.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Moved from lib/librte_mempool; the ring mempool is now an independent
driver.
Shared builds now need to add librte_mempool_ring for:
* ring_mp_mc
* ring_sp_sc
* ring_sp_mc
* ring_mp_sc
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
There was a compile-time setting to enable a ring to yield when
it entered a loop in mp or mc rings waiting for the tail pointer update.
Build-time settings are not recommended for enabling/disabling features,
and since this was off by default, remove it completely. If needed, a
runtime-enabled equivalent can be used.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The debug option only provided statistics to the user, most of
which could be tracked by the application itself. Remove this
compile-time option and feature, simplifying the code.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Users compiling DPDK should not need to know or care about the arrangement
of cachelines in the rte_ring structure. Therefore just remove the build
option and set the structures to be always split. On platforms with 64B
cachelines, use 128B rather than 64B alignment for improved performance,
since it stops the producer and consumer data being on adjacent cachelines.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Downstreams might want to provide different DPDK releases at the same
time to support multiple consumers of DPDK linked against older and newer
sonames.
Also, due to the interdependencies that DPDK libraries can have,
applications might end up with an executable space in which multiple
versions of a library are mapped by ld.so.
Think of LibA that got an ABI bump and LibB that did not get an ABI bump
but depends on LibA.
Application
\-> LibA.old
\-> LibB.new -> LibA.new
That is a conflict which can be avoided by setting CONFIG_RTE_MAJOR_ABI.
If set CONFIG_RTE_MAJOR_ABI overwrites any LIBABIVER value.
An example might be ``CONFIG_RTE_MAJOR_ABI=16.11`` which will make all
libraries librte<?>.so.16.11 instead of librte<?>.so.<LIBABIVER>.
We need to cut arbitrarily long strings after the .so now, and this would
work for any ABI version in LIBABIVER:
$(Q)ln -s -f $< $(patsubst %.$(LIBABIVER),%,$@)
But using the following instead additionally allows simplifying the
Makefile for the CONFIG_RTE_NEXT_ABI case.
$(Q)ln -s -f $< $(shell echo $@ | sed 's/\.so.*/.so/')
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Reviewed-by: Jan Blunck <jblunck@infradead.org>
Tested-by: Jan Blunck <jblunck@infradead.org>
Re-enable CONFIG_RTE_LIBRTE_SCHED, since it is needed to build
correctly.
Fix a few warnings when compiling mpipe_tilegx.c.
Remove an empty rte_cpu_feature_table[] array using a bogus type.
Properly set RTE_OBJCOPY_{TARGET,ARCH} in mk/arch/tile/rte.vars.mk.
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Remove the RTE_LIBRTE_SFC_EFX_TSO config option since it is not
required any more:
- the unreasonable limit on the number of Tx queues when TSO is not
actually required should be solved using a per-device parameter
- the performance difference with and without TSO compiled in is small
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patchset introduces a new application which allows measuring the
performance parameters of the PMDs available in the crypto tree. The goal
of this application is to replace the existing performance tests in
app/test. The parameters available are: throughput (--ptest throughput)
and latency (--ptest latency). Users can use multiple cores to run tests
on, but only one type of crypto PMD can be measured during a single
application execution. Cipher parameters, type of device, type of
operation and chain mode have to be specified on the command line as
application parameters. These parameters are checked using the device
capabilities structure.
A couple of new library functions in librte_cryptodev are introduced for
application use.
To build the application, the CONFIG_RTE_APP_CRYPTO_PERF flag has to be
set (it is set by default).
Example of usage: -c 0xc0 --vdev crypto_aesni_mb_pmd -w 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
--cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
sha1-hmac --auth-op generate --auth-key-sz 64 --auth-digest-sz 12
--total-ops 10000000 --burst-sz 32 --buffer-sz 64
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Signed-off-by: Marcin Kerlin <marcinx.kerlin@intel.com>
Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
Adds a Makefile for the scheduler cryptodev PMD, and updates existing
Makefiles. Unlike other cryptodev PMDs, the scheduler PMD
is required to be built as a shared library.
Adds scheduler PMD enable and debug flags to config/common_base.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
KNI ethtool support (the KNI control path) is not commonly used,
and it tends to break the build with new versions of the Linux kernel.
The KNI ethtool feature is disabled by default. The KNI datapath is not
affected by this update.
It is possible to enable feature explicitly with config option:
"CONFIG_RTE_KNI_KMOD_ETHTOOL=y"
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch introduces a crypto poll mode driver
using ARMv8 cryptographic extensions.
CPU compatibility with this driver is detected at
run-time and the virtual crypto device will not be
created if the CPU doesn't provide
AES, SHA1, SHA2 and NEON.
This PMD is optimized to provide a performance boost
for chained crypto operation processing,
such as encryption + HMAC generation or
decryption + HMAC validation. In particular,
cipher-only or hash-only operations are
not provided.
The driver currently supports AES-128-CBC
in combination with: SHA256 HMAC and SHA1 HMAC
and relies on the external armv8_crypto library:
https://github.com/caviumnetworks/armv8_crypto
The ARMv8 crypto PMD is built if compiling for ARM64
and the CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO option
is enabled in the configuration file.
The ARMV8_CRYPTO_LIB_PATH environment variable must
point to the appropriate library directory.
Signed-off-by: Zbigniew Bodek <zbigniew.bodek@caviumnetworks.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Elastic Flow Distributor (EFD) is a distributor library that uses
perfect hashing to determine a target/value for a given incoming flow key.
It has the following advantages:
- First, because it uses perfect hashing, it does not store
the key itself and hence lookup performance is not dependent
on the key size.
- Second, the target/value can be any arbitrary value, hence
the system designer and/or operator can better optimize service rates
and inter-cluster network traffic locating.
- Third, since the storage requirement is much smaller than a hash-based
flow table (i.e. a better fit for the CPU cache), EFD can scale to
millions of flow keys.
Finally, with the current optimized library implementation, performance
is fully scalable with the number of CPU cores.
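A small hedged sketch of the update/lookup flow (table creation would
normally happen once at init; the sizing, key layout and socket mask
below are illustrative and the function names follow the librte_efd
header):

    #include <stdint.h>
    #include <rte_efd.h>
    #include <rte_lcore.h>

    struct flow_key { uint32_t ip_src, ip_dst; uint16_t port_src, port_dst; };

    /* Map a flow key to a small target value (e.g. a backend node id). */
    static efd_value_t
    efd_demo(const struct flow_key *key)
    {
        unsigned int socket = rte_socket_id();
        struct rte_efd_table *table;

        table = rte_efd_create("demo_efd", 1 << 20, sizeof(*key),
                               1 << socket, /* online CPU socket bitmask */
                               socket);     /* offline socket for compute */

        /* Insert (or update) the target value for this key... */
        rte_efd_update(table, socket, key, 7 /* example target */);

        /* ...and look it up later; only the value is kept, not the key. */
        return rte_efd_lookup(table, socket, key);
    }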
Signed-off-by: Byron Marohn <byron.marohn@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Signed-off-by: Saikrishna Edupuganti <saikrishna.edupuganti@intel.com>
Acked-by: Christian Maciocco <christian.maciocco@intel.com>
Because using the NFP PMD requires a specific BSP to be installed, PMD
support was not enabled by default before. This was just to make
people aware of that dependency, since no such BSP is needed
for just compiling DPDK with NFP PMD support.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Add the PCI device ID for ConnectX-5 and enable multi-packet send for PF
and VF, along with documentation and release note updates.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Reviewed-by: Mark Spender <mspender@solarflare.com>
Reviewed-by: Robert Stonehouse <rstonehouse@solarflare.com>
The PMD allows DPDK and the host to communicate using a raw
device interface on the host and in the DPDK application. The device
created is a Tap device with an L2 packet header.
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Aws Ismail <aismail@ciena.com>
Tested-by: Vasily Philipov <vasilyf@mellanox.com>
Enable the PMD by default on supported configurations.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Added API for `rte_eth_tx_prepare`
uint16_t rte_eth_tx_prepare(uint8_t port_id, uint16_t queue_id,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
Added fields to the `struct rte_eth_desc_lim`:
uint16_t nb_seg_max;
/**< Max number of segments per whole packet. */
uint16_t nb_mtu_seg_max;
/**< Max number of segments per one MTU */
These fields can be used to create valid packets according to the
following rules:
* For non-TSO packet, a single transmit packet may span up to
"nb_mtu_seg_max" buffers.
* For TSO packet the total number of data descriptors is "nb_seg_max",
and each segment within the TSO may span up to "nb_mtu_seg_max".
Added functions:
int
rte_validate_tx_offload(struct rte_mbuf *m)
to validate general requirements for tx offloads set in the mbuf of a
packet, such as flag completeness. In the current implementation this
function is called optionally when RTE_LIBRTE_ETHDEV_DEBUG is enabled.
int rte_net_intel_cksum_prepare(struct rte_mbuf *m)
to prepare pseudo header checksum for TSO and non-TSO tcp/udp packets
before hardware tx checksum offload.
- for non-TSO tcp/udp packets, the full pseudo-header checksum is
calculated and set.
- for TSO, the IP payload length is not included.
int
rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
this function uses the same logic as rte_net_intel_cksum_prepare, but
allows the application to choose which offloads should be taken into
account, if full preparation is not required.
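A typical call sequence might look like the following hedged sketch
(error handling is reduced to a comment; rte_errno reporting is assumed
to follow the tx_prepare documentation):

    #include <rte_ethdev.h>
    #include <rte_errno.h>

    static uint16_t
    send_burst(uint8_t port_id, uint16_t queue_id,
               struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
    {
        uint16_t nb_prep;

        /* Validate and fix up offloads (pseudo-header checksums,
         * segment limits) before handing the burst to the hardware. */
        nb_prep = rte_eth_tx_prepare(port_id, queue_id, tx_pkts, nb_pkts);
        if (nb_prep < nb_pkts) {
            /* tx_pkts[nb_prep] failed preparation; rte_errno tells why.
             * Drop or fix it before retrying. */
        }
        return rte_eth_tx_burst(port_id, queue_id, tx_pkts, nb_prep);
    }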
PERFORMANCE TESTS
-----------------
This feature was tested with modified csum engine from test-pmd.
The packet checksum preparation was moved from application to Tx
preparation step placed before burst.
We may expect some overhead costs caused by:
1) using additional callback before burst,
2) rescanning burst,
3) additional condition checking (packet validation),
4) worse optimization (e.g. packet data access, etc.)
We tested it using ixgbe Tx preparation implementation with some parts
disabled to have comparable information about the impact of different
parts of implementation.
IMPACT:
1) For an unimplemented Tx preparation callback the performance impact is
negligible,
2) For the packet condition check without checksum modifications (nb_segs,
available offloads, etc.) the result is 14626628/14252168 (~2.62% drop),
3) Full support in the ixgbe driver (point 2 + packet checksum
initialization) is 14060924/13588094 (~3.48% drop)
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
There was an option, CONFIG_RTE_INSECURE_FUNCTION_WARNING (disabled by
default), which prevents the use of some libc functions:
sprintf, snprintf, vsnprintf, strcpy, strncpy, strcat, strncat, sscanf,
strtok, strsep and strlen.
It is all about using them in the right place with the right precautions.
However, it is neither really possible nor good advice to disable them.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Today, all logs whose level is lower than INFO are dropped at
compile-time. This prevents enabling debug logs at runtime using
--log-level=8.
The rationale was to remove debug logs from the data path at
compile-time, avoiding a test at run-time.
This patch changes the behavior of RTE_LOG() to avoid the compile-time
optimization, and introduces the RTE_LOG_DP() macro that has the same
behavior as the previous RTE_LOG(), for the rare cases where debug
logs are in the data path.
So it is now possible to enable debug logs at run-time by just
specifying --log-level=8. Some drivers still have special compile-time
options to enable more debug logs. Maintainers may consider
removing/reducing them.
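For example (a sketch; USER1 is just one of the predefined log types and
the messages are made up):

    #include <rte_log.h>

    static void
    log_example(unsigned int port_id, unsigned int nb_dropped)
    {
        /* Control-path log: always compiled in, filtered at run-time
         * by --log-level. */
        RTE_LOG(DEBUG, USER1, "port %u configured\n", port_id);

        /* Data-path log: keeps the previous compile-time filtering so
         * the run-time test can be avoided in fast-path code. */
        RTE_LOG_DP(DEBUG, USER1, "port %u dropped %u packets\n",
                   port_id, nb_dropped);
    }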
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
- Fix to use bitmapped values in the NVM configuration for speed
capability advertisement. This issue is specific to the 25G NIC since it
is capable of 25G and 10G speeds.
- Update the feature list.
Fixes: 64c239b7f8 ("net/qede: fix advertising link speed capability")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
This patch replaces the name "libcrypto" with "openssl" in file
directories, symbol prefixes and sub-names connected with the old name.
Poll mode driver files, test files, and documentation are renamed.
This is done for better name association with the library, because
the cryptography operations use the OpenSSL library crypto API.
Fixes: d61f70b4c9 ("crypto/libcrypto: add driver for OpenSSL library")
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The QEDE PMD now uses an unzipped firmware file, eliminating the
dependency on zlib. Hence remove the LDLIBS entry from the Makefile and
enable the qede PMD by default.
Fixes: 6adac0bf30 ("qede: add missing external dependency and disable by default")
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Use ARM NEON intrinsics to implement the i40e vPMD.
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Since we switched to kernel dynamic debugging, it is possible to remove
the compile-time debug log configuration.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
This code provides the initial implementation of the libcrypto
poll mode driver. All cryptography operations use the OpenSSL
library crypto API. Each algorithm uses the EVP_ interface from the
OpenSSL API, which is recommended by the OpenSSL maintainers.
This patch adds libcrypto poll mode driver support to librte_cryptodev
library.
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added new SW PMD which makes use of the libsso SW library,
which provides wireless algorithms ZUC EEA3 and EIA3
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_ZUC_EEA3
- RTE_CRYPTO_SYM_AUTH_ZUC_EIA3
The ZUC hash and cipher algorithms, which are enabled
by this crypto PMD, are implemented by Intel's libsso software
library.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Also add a read memory barrier to avoid status inconsistency
between two Rx descriptor readings.
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Remove the vhost-cuse code, including the eventfd_link kernel module that
is for vhost-cuse only.
The lib/virt/qemu-wrap.py script is also removed, as it is mainly for
vhost-cuse usage.
As we now have only one vhost implementation, only one vhost config
option is needed. Thus, CONFIG_RTE_LIBRTE_VHOST_USER is removed.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch adds a port of the ACL library to ppc64le.
Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Acked-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch adds a ppc64le port of the LPM library to DPDK.
Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Acked-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Following discussions on the mailing list [1] and since nobody stood up to
implement the necessary cleanups, here is the ivshmem integration removal.
There is not much to say about this patch, a lot of code is being removed.
The default configuration file for packet_ordering example is replaced with
the "native" x86 file.
The only tricky part is in eal_memory with the memseg index stuff.
More cleanups can be done after this but will come in subsequent patchsets.
[1]: http://dpdk.org/ml/archives/dev/2016-June/040844.html
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
With the current config structure, all configuration parameters are put
into common_base with a default value and overwritten in the environment
file if required; CONFIG_RTE_VIRTIO_USER is missing in common_base.
The fix is simple: add CONFIG_RTE_VIRTIO_USER=n as the default
macro value.
Fixes: ce2eabdd43 ("net/virtio-user: add virtual device")
Reported-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Inline TX will be fully managed by the PMD after Verbs is bypassed in the
data path. Remove the current code until then.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
There is no scatter/gather support anymore, CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N
has no purpose and can be removed.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
- Add device id to the PCI table
- Add polling for the slowpath events for CMT mode device
- Add prerequisites to allow 100g mode
* The minimum number of queues needed is 2
* Only an even number of queues is allowed
- Update documentation
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Introduce driver initialization and enable build infrastructure for the
nicvf PMD driver.
By default, it is enabled only for the defconfig_arm64-thunderx-*
config, as it is an inbuilt NIC device.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <kamil.rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
This patch adds the initial skeleton for bnxt driver along with the
nic guide, and ties the driver into the build system.
At this point, the driver simply fails init.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
[Release Note Addition]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Use ARM NEON intrinsics to implement the ixgbe vPMD.
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
[style fixes as highlighted by checkpatch.pl]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
By default, the mempool ops used for mbuf allocations is a multi-producer
and multi-consumer ring. We could imagine a target (maybe some
network processors?) that provides a hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.
Signed-off-by: David Hunt <david.hunt@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Added new SW PMD which makes use of the libsso_kasumi SW library,
which provides wireless algorithms KASUMI F8 and F9
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
- RTE_CRYPTO_SYM_AUTH_KASUMI_F9
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The librte_pdump library provides a framework for
packet capturing in DPDK. The library provides a set of
APIs to initialize the packet capture framework, to
enable or disable the packet capture, and to uninitialize
it.
The librte_pdump library works on a client/server model.
The server is responsible for enabling or disabling the
packet capture and the clients are responsible
for requesting the enabling or disabling of the packet
capture.
The enable APIs take port, queue, ring and
mempool parameters. Applications should pass this information
to get the packets from the DPDK ports.
For enable requests from applications, the library creates a client
request containing the mempool, ring, port and queue information and
sends the request to the server. After receiving the request, the server
registers Rx and Tx callbacks for all the requested ports and queues.
Once registered, the callbacks receive the Rx and Tx packets. The packets
are then copied to new mbufs allocated from the user-passed mempool, and
these new mbufs are enqueued to the application-passed ring. Applications
need to dequeue the mbufs from the rings and direct them to devices like
the pcap vdev for viewing the packets outside of DPDK
using packet capture tools.
For disable requests, the library creates a client request containing
the port and queue information and sends the request to the server.
After receiving the request, the server removes the Rx and Tx callbacks
for all the ports and queues.
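A hedged client-side sketch (the ring and mempool are assumed to be
created by the application, the port/queue numbers are examples, and the
flag/function names follow the librte_pdump header):

    #include <rte_pdump.h>
    #include <rte_ring.h>
    #include <rte_mempool.h>

    /* Ask the server (primary process) to mirror port 0, queue 0 traffic
     * into `ring`, using mbufs allocated from `mp`. */
    static int
    start_capture(struct rte_ring *ring, struct rte_mempool *mp)
    {
        int ret = rte_pdump_init(NULL); /* NULL: default server socket
                                         * path (assumption) */
        if (ret != 0)
            return ret;

        return rte_pdump_enable(0 /* port */, 0 /* queue */,
                                RTE_PDUMP_FLAG_RXTX, ring, mp,
                                NULL /* no filter */);
    }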
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Commit 66819e6 introduced a dependency on libarchive to be able
to use some tar resources in the unit tests.
It is now an optional dependency, because some systems do not have it
installed.
If CONFIG_RTE_APP_TEST_RESOURCE_TAR is disabled, the PCI test will not
be run. When a "configure" script is integrated, the libarchive
availability could be checked to automatically enable the option.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Reviewed-by: Jan Viktorin <viktorin@rehivetech.com>
The qede driver depends on libz, but the LDLIBS entry in the makefile
was missing. Also, because of the external dependency, disable it
in the default config as per common DPDK policy on external dependencies.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch introduces the dpaa2 machine target to address differences
w.r.t. the default armv8-a machine: the CPU parameter, the number of
cores (8), and no NUMA support.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
IGB_UIO is not supported for the arm64 arch in the kernel, so disable it.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Reviewed-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
The qede PMD driver is failing when building for ARMv7:
drivers/net/qede/base/ecore_dev.c:1150:6:
error: variable ‘prs_reg’ set but not used
[...]
drivers/net/qede/base/ecore_dev.c:2492:13:
error: variable ‘p_phys’ set but not used
[...]
drivers/net/qede/base/ecore_dev.c:2532:39:
error: variable ‘pbl_size’ set but not used
Fixes: 3eae93a9bf ("qede: enable PMD build")
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
The default was to compile all logs (including debug) and set
the default level to debug.
As some debug logs may hurt performance, a notice is added and the
default level is now info.
In order to enable debug logs, they must be compiled with
RTE_LOG_LEVEL=RTE_LOG_DEBUG and enabled at runtime with --log-level=8.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The driver i40e was using a specific PCI config before the release 16.04.
Since 16.04, it is always enabled in i40e (commit 56465cfaf).
The API has been deprecated in the commit 68f7759382.
The igb_uio implementation has been deprecated in commit b7cf8e155.
The config helper - through igb_uio sysfs entries - is now removed.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
The cryptodev API was introduced in the DPDK 2.2 release.
Since then it has
- been reviewed and iterated for the DPDK 16.04 release
- had extensive use by the l2fwd-crypto app,
the ipsec-secgw example app,
the test app.
We believe it is now stable and the EXPERIMENTAL label should be removed.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch fixes the max logical core number and memory channel number
settings on the IBM POWER8 platform.
1. The max number of logical cores of a POWER8 processor is 96. Normally,
there are two sockets on a server, so the max number of logical cores
is 192. This patch sets CONFIG_RTE_MAX_LCORE to 256.
2. The socket number on the POWER8 little endian platform can be larger
than 16. This patch sets CONFIG_RTE_MAX_NUMA_NODES to 32 for POWER8.
3. Currently, the max number of memory channels is hardcoded to 4.
However, on a POWER8 machine, the max number of memory channels is 8.
This patch removes the constraint.
Signed-off-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Previously, the vector driver was not the first (default) choice for
i40e, as it cannot fill in packet type info for l3fwd to work well. Now
there is an option for l3fwd to analyze the packet type in software, so
enable the vector driver by default.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The patch introduces a new PMD. This PMD is implemented as a thin wrapper
of librte_vhost, so librte_vhost is also needed to compile the PMD.
The vhost messages will be handled only when a port is started, so start
a port first, then invoke QEMU.
The PMD has 2 parameters.
- iface: The parameter is used to specify a path to connect to a
virtio-net device.
- queues: The parameter is used to specify the number of queues the
virtio-net device has.
(Default: 1)
Here is an example.
$ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i
To connect above testpmd, here is qemu command example.
$ qemu-system-x86_64 \
<snip>
-chardev socket,id=chr0,path=/tmp/sock0 \
-netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=1 \
-device virtio-net-pci,netdev=net0,mq=on
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Update for queue state event name:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This is a PMD for the Amazon ENA (Elastic Network Adapters) ethernet
family.
The driver operates a variety of ENA adapters through feature negotiation
with the adapter and an upgradable command set.
The ENA driver handles PCI Physical and Virtual ENA functions.
Signed-off-by: Evgeny Schemeilin <evgenys@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Release Note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The new flag CONFIG_RTE_ARCH_ARM_NEON_MEMCPY is used to enable memcpy
optimizations in EAL.
As it is not always a performance benefit, the feature is disabled.
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
ARMv7 machines usually have NEON available.
Customization of -mfpu=neon must be done by hand or by defining
another machine's rte.vars.mk.
So, the CONFIG_RTE_ARCH_ARM_NEON option is useless (and confusing).
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Mmap PCI resource file and add inline functions for reading from and
writing to PCI resource address space.
Add description of IBUF and OBUF address space.
Add configuration option for setting which firmware type will be used.
The right address space values for IBUF and OBUF offsets are used
according to the configuration option CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS.
Setting the link up/down and getting info about the link status is done
through the mmapped PCI resource address space.
Signed-off-by: Matej Vido <vido@cesnet.cz>
The CROSS variable has an empty default value (for native builds) and
must be set when using a cross-toolchain.
Signed-off-by: Liming Sun <lsun@ezchip.com>
Acked-by: Zhigang Lu <zlu@ezchip.com>
Originally, the source port in librte_port is an input port used as a
packet generator. Similar to the Linux kernel /dev/zero character device,
it generates null packets. This patch adds optional PCAP file support to
the source port: instead of sending NULL packets, the source port
generates packets copied from a PCAP file. To increase performance, the
packets in the file are loaded into memory initially, and copied to mbufs
in a circular manner. Users can enable or disable this feature by setting
the CONFIG_RTE_PORT_PCAP compile-time option to "y" or "n".
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Enabled the CONFIG_RTE_LIBRTE_LPM, CONFIG_RTE_LIBRTE_TABLE and
CONFIG_RTE_LIBRTE_PIPELINE libraries for arm and arm64.
The TABLE and PIPELINE libraries were disabled due to the LPM library
dependency.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Parse the device parameters from rte_eal_vdev_init,
instead of the config file, so the user can change the parameters
at runtime.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch provides the implementation of a NULL crypto PMD, which supports
NULL cipher and NULL authentication operations, which can be chained together
as follows:
- Authentication Only
- Cipher Only
- Authentication then Cipher
- Cipher then Authentication
As this is a NULL operation device, the crypto operations submitted for
processing are not actually modified and are stored in the queue pair's
processed packets ring, ready for collection when
rte_cryptodev_dequeue_burst() is called.
The patch also contains the related unit test functions to test the PMD's
supported operations.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
This patch provides the implementation of an AES-NI accelerated crypto PMD
which is dependent on Intel's multi-buffer library, see the white paper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture Processors"
This PMD supports AES_GCM authenticated encryption and authenticated
decryption using 128-bit AES keys.
The patch also contains the related unit test functions.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
Added new SW PMD which makes use of the libsso SW library,
which provides wireless algorithms SNOW 3G UEA2 and UIA2
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_SNOW3G_UEA2
- RTE_CRYPTO_SYM_AUTH_SNOW3G_UIA2
The SNOW 3G hash and cipher algorithms, which are enabled
by this crypto PMD are implemented by Intel's libsso software
library. For library download and build instructions,
see the documentation included (doc/guides/cryptodevs/snow3g.rst)
The patch also contains the related unit test functions to test the PMD's
supported operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
As the cryptodev library no longer depends on the mbuf_offload library,
this patch removes it.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Remove the PCI configuration of 'extended tag' and 'max read request
size', as they are not required by all devices; this lets the PMD
configure them if necessary.
In addition, 'pci_config_space_set()' is deprecated.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
CONFIG_RTE_LIBRTE_EAL_*APP can be replaced by CONFIG_RTE_EXEC_ENV_*APP.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
In order to clean up the configuration files and reduce the amount of
duplicated configuration information, add a new
file called common_base which contains just about all of the
configuration lines in one place. The common_bsdapp and
common_linuxapp files then include this one file and add only the
OS-specific delta configuration lines.
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Until now, the generic 64-bit flag was used only for ARM or Linux,
and was not defined for the BSD environment.
Fixes: d05e7115f4 ("mem: support layout of IBM Power")
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Remove _VIRTIO_PMD=n from the arch config and let the arch use _VIRTIO_PMD
from config/common_linuxapp.
Signed-off-by: Santosh Shukla <sshukla@mvista.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The physically linked-together combined library has been an increasing
source of problems, as was predicted when library and symbol versioning
was introduced. Replace the complex and fragile construction with a
simple linker script which achieves the same without all the problems,
remove the related kludges from e.g. the mlx drivers.
Since creating the linker script is practically zero cost, remove the
config option and just create it always.
Based on a patch by Sergio Gonzalez Monroy; the linker script approach was
initially suggested by Neil Horman.
Suggested-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Suggested-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch reduces the number of warnings from 53 to 40.
It removes the usual false positives by utilizing the unaligned_uint*_t data types.
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
The CONFIG_RTE_MACHINE value must not contain hyphens to work correctly. This was
initially handled only for the file name defconfig_arm-armv7a-linuxapp-gcc. This
patch fixes the install-sdk goal, which otherwise creates a wrong directory for
this platform.
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
By default, all targets are configured with a 64-byte cache line
size; targets that have a different cache line size can override it
through their target-specific config file.
ThunderX and power8 are selected as CONFIG_RTE_CACHE_LINE_SIZE=128 targets
based on the existing configuration.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Intel Architecture (IA), also called x86, comes in the following variants:
- i686
- x86_x32
- x86_64
The code common to all of these architectures can now be guarded
by a single flag, RTE_ARCH_X86.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
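A small sketch of what the combined flag enables; the per-variant flag
spellings and the helper function are illustrative assumptions:

    extern void do_x86_specific_work(void);  /* hypothetical helper */

    static void
    arch_specific_init(void)
    {
    /* Previously this needed something like:
     *   #if defined(RTE_ARCH_I686) || defined(RTE_ARCH_X86_X32) || \
     *       defined(RTE_ARCH_X86_64)
     * A single guard now covers all three variants: */
    #ifdef RTE_ARCH_X86
            do_x86_specific_work();
    #endif
    }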
More and more machines and architectures are added without keeping
the lists up-to-date.
Replace the lists with a pointer to the reference directory.
The same kind of pointer is used for the supported compilers and environments.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The periodic debug option is used to collect periodic
events like statistics, register accesses, etc., and won't
interfere with user-level messages.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Cryptodev was marked experimental, and mbuf_offload depends on it.
The mbuf_offload library is part of the crypto area, which requires
some discussion before it can have a stable API.
The experimental mark is also added to rte_cryptodev_configure()
to be sure one cannot miss it.
Fixes: 66874e55f5 ("cryptodev: mark experimental state")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Implement vqtbl1q_u8 intrinsic function, which is not supported in armv7-a.
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
As it causes issues when building with RTE_MACHINE=default due to SSE4.x
requirements, and in other discussions was so far rated as "lightly tested and
doesn't provide a really significant performance improvement", let us disable
it in the default config.
(=> http://dpdk.org/ml/archives/dev/2015-November/029067.html)
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Commit 42ec27a017 causes a compile error on arm, as RTE_SCHED_VECTOR
supports only SSE intrinsics, so disable it until we have NEON support.
Fixes: 42ec27a017 ("sched: enable SSE optimizations in config")
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Jan Viktorin <viktorin@rehivetech.com>
Let each armv8 machine target capture only its differences
from the common defconfig_arm64-armv8a-linuxapp-gcc.
Suggested-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Create the new xgene1 machine target to address the difference
in optional armv8-a CRC extension availability compared to the
default armv8-a machine target (which enables the CRC extension by default).
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The crypto API is in an early state.
It requires more discussions and experiments to declare it stable,
as discussed in http://dpdk.org/ml/archives/dev/2015-November/028634.html
A documentation section will be required in the guides.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch provides the initial implementation of the AES-NI multi-buffer
based crypto poll mode driver using DPDK's new cryptodev framework.
This PMD is dependent on Intel's multibuffer library, see the whitepaper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors", see ref 1 for details on the library's design and ref 2 to
download the library itself. This initial implementation is limited to
supporting the chained operations of "hash then cipher" or "cipher then
hash" for the following cipher and hash algorithms:
Cipher algorithms:
- RTE_CRYPTO_CIPHER_AES_CBC (with 128-bit, 192-bit and 256-bit keys supported)
Authentication algorithms:
- RTE_CRYPTO_AUTH_SHA1_HMAC
- RTE_CRYPTO_AUTH_SHA256_HMAC
- RTE_CRYPTO_AUTH_SHA512_HMAC
- RTE_CRYPTO_AUTH_AES_XCBC_MAC
Important Note:
Because the multi-buffer library is designed for
accelerating IPsec crypto operations, the digests generated for the HMAC
functions are truncated to the lengths specified by the IPsec RFCs; e.g.
RFC 2404, which covers the use of HMAC-SHA-1 with IPsec, specifies that the
digest is truncated from 20 to 12 bytes.
Build instructions:
To build DPDK with the AESNI_MB_PMD, the user is required to download
(ref 2) and compile the multi-buffer library on their system before
building DPDK. The environment variable AESNI_MULTI_BUFFER_LIB_PATH
must be exported with the path where you extracted and built the
multi-buffer library, and finally set CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y in
config/common_linuxapp.
Current status: it does not support crypto operations
across chained mbufs, or cipher-only or hash-only operations.
ref 1:
https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/fast-multi-buffer-ipsec-implementations-ia-processors-p
ref 2: https://downloadcenter.intel.com/download/22972
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
This patch adds a PMD for the Intel Quick Assist Technology DH895xxC
hardware accelerator.
This patch depends on a QAT PF driver for device initialization. See
the file docs/guides/cryptodevs/qat.rst for configuration details.
This patch supports a limited subset of QAT device functionality,
currently supporting chaining of cipher and hash operations for the
following algorithms:
Cipher algorithms:
- RTE_CRYPTO_CIPHER_AES_CBC (with 128-bit, 192-bit and 256-bit keys supported)
Hash algorithms:
- RTE_CRYPTO_AUTH_SHA1_HMAC
- RTE_CRYPTO_AUTH_SHA256_HMAC
- RTE_CRYPTO_AUTH_SHA512_HMAC
- RTE_CRYPTO_AUTH_AES_XCBC_MAC
Some limitations of this patchset, which shall be addressed in a
subsequent release:
- Chained mbufs are not supported.
- Hash only is not supported.
- Cipher only is not supported.
- Only in-place is currently supported (destination address is
the same as source address).
- Only supports session-oriented API implementation (session-less
APIs are not supported).
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
This library adds support for attaching a chain of offload operations to an
mbuf. It contains the definition of the rte_mbuf_offload structure as
well as helper functions for attaching offloads to mbufs and mempool
management functions.
This initial implementation supports attaching multiple offload
operations to a single mbuf, but only a single offload operation of a
specific type can be attached to that mbuf.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch contains the initial proposed APIs and device framework for
integrating crypto packet processing into DPDK.
Features include:
- Crypto device configuration / management APIs
- Definitions of supported cipher algorithms and operations.
- Definitions of supported hash/authentication algorithms and
operations.
- Crypto session management APIs
- Crypto operation data structures and APIs for allocation of the crypto
operation structure used to specify the crypto operations to
be performed on a particular mbuf.
- Extension of mbuf to contain crypto operation data pointer and
extra flags.
- Burst enqueue / dequeue APIs for processing of crypto operations.
Signed-off-by: Des O Dea <des.j.o.dea@intel.com>
Signed-off-by: John Griffin <john.griffin@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
All #ifdefs in code should be enabled/disabled via the DPDK config
(or, better yet, removed altogether).
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
The fm10k driver hits a compile error on non-x86 platforms due to its
SSE instructions, and the original implementation had no switch to
turn off the vPMD.
This improvement introduces a macro to turn the vPMD functions on/off;
it is on by default. On non-x86 platforms, it can simply be turned
off to fix the compile issue.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Issue: the l3fwd app needs the ptype in the mbuf to forward packets properly.
But some drivers, like the virtio driver and the FVL vPMD, do not set the ptype
in the mbuf, so l3fwd cannot work properly on top of those drivers.
Configure the vector PMD option as no by default as a workaround for l3fwd.
Once the l3fwd app can handle an undefined ptype, or the i40e vPMD can
return the ptype, the option will be set to yes by default again.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Add virtual PMD which communicates with COMBO cards through sze2
layer using libsze2 library.
Since link_speed is uint16_t, a value for 100G
speed cannot be used, therefore link_speed is set to ETH_LINK_SPEED_10G until
the type of link_speed is resolved.
Signed-off-by: Matej Vido <matejvido@gmail.com>
Commit 36080ff96b causes a compile error on tile; as tile
does not support KNI, we disable CONFIG_RTE_KNI_KMOD.
Fixes: 36080ff96b ("config: add KNI kmod option")
Reported-by: Guo Xin <gxin@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
Create the new thunderx machine target to address the differences
in cache line size and "-mcpu=thunderx" versus the default armv8-a machine target.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Make DPDK run on ARMv7-A architecture. This patch assumes
ARM Cortex-A9. However, it is known to be working on Cortex-A7
and Cortex-A15.
Signed-off-by: Vlastimil Kosar <kosar@rehivetech.com>
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Add support for directories as arguments to -d for loading all drivers
from a given directory. Additionally a default driver directory can be
set in the build-time configuration, in which case it will always be used
when EAL is initialized.
This simplifies usage in shared library configuration significantly over
manually loading individual drivers with -d, and allows distros to
establish a drop-in driver directory for seamless integration
with 3rd party drivers etc.
Suggested-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Suggested-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: David Marchand <david.marchand@6wind.com>
The fm10k vector driver is specific to the x86 platform and can't compile
on IBM POWER because the tmmintrin.h header file is missing. This patch turns
off fm10k driver compilation on IBM POWER to prevent the compile issue.
Signed-off-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
It enlarges the number of supported queues to the hardware-allowed
maximum. There was a software limitation of 64 per physical port,
which is not reasonable.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
RSS implementation with parent/child QPs comes from mlx4 and is temporary.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
In its current state, this driver implements the bare minimum to initialize
itself and Mellanox ConnectX-4 adapters without doing anything else
(no RX/TX for instance). It is disabled by default since it is based on the
mlx4 driver and also depends on libibverbs.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Or Ami <ora@mellanox.com>
The vPMD RX function uses multi-buffer and SSE instructions to
accelerate the RX speed, but the packet type cannot currently be supported
by the vPMD RX, because it would decrease performance heavily.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
This option permits building librte_kni.so without building rte_kni.ko,
so you can build an SDK without building kernel drivers.
Signed-off-by: Nikita Kozlov <nikita@elyzion.net>
Enable vector ixgbe and i40e bulk alloc for bsd as it is
already done for linux.
Fixes: 304caba126 ("config: fix bsd options")
Fixes: 0ff3324da2 ("ixgbe: rework vector pmd following mbuf changes")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The RX_OLFLAGS option was renamed from DISABLE to ENABLE in driver code
and linux config.
It is now renamed also in bsd config and documentation.
Fixes: 359f106a69 ("ixgbe: prefer enabling olflags rather than not disabling")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This driver has too many issues:
- too big
- bad coding style
- no git history (dropped in 2 patches)
- no documentation
- no BSD support
- no maintainer
And the biggest one, which is the reason for disabling it:
- many build issues
If the last 4 issues are not fixed in the next release (2.2),
the driver must be removed.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
When reverting the max queues per port to fix an ABI breakage,
the BSD config was forgotten.
Fixes: 94c6cba001 ("config: revert the max queues per port to 256")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
These are the build infrastructure changes for the bnx2x driver.
- enable BNX2X poll mode driver in default config.
- add it to mk
- put entry in MAINTAINERS
Note: I intentionally did not list myself as maintainer of this
driver. QLogic has discussed taking over as maintainer.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Harish Patil <harish.patil@qlogic.com>
The RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC config option is not really
necessary, as the bulk alloc RX function can be used anyway, as long as the
necessary conditions are satisfied; these are already checked
in the library.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Fix "MACRO redefined" and "function redefined" compilation errors in FreeBSD
by adding CXGBE prefix to them. Also remove reference to a linux header
linux/if_ether.h and use DPDK macros directly. Finally, enable CXGBE PMD
for FreeBSD.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
In the current memory hierarchy, memsegs are groups of physically
contiguous hugepages, memzones are slices of memsegs and malloc further
slices memzones into smaller memory chunks.
This patch modifies malloc so it partitions memsegs instead of memzones.
Thus memzones would call malloc internally for memory allocation while
maintaining its ABI.
During initialization malloc sets all available memory as part of the heaps.
CONFIG_RTE_MALLOC_MEMZONE_SIZE was used to specify the default memory
block size to expand the heap. The option is not used/relevant anymore,
so we remove it.
Remove free_memseg field from internal mem config structure as it is
not used anymore.
Also remove code in ivshmem that was setting up free_memseg on init.
It would then be possible to free memzones and therefore any other structure
based on memzones, i.e. mempools.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Move malloc inside eal and create a new section in MAINTAINERS file for
Memory Allocation in EAL.
Create a dummy malloc library to avoid breaking applications that have
librte_malloc in their DT_NEEDED entries.
This is the first step towards using malloc to allocate memory directly
from memsegs. Thus, memzones would allocate memory through malloc,
allowing to free memzones.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
This commit adds a poll mode driver for the mPIPE hardware present on
TILE-Gx SoCs.
Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
This commit adds support for the TILE-Gx platform, as well as the TILE
CPU architecture. This architecture port is fairly simple due to its
reliance on generics for most arch stuff.
Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
The library name is now being pinned to "dpdk" instead of intel_dpdk,
powerpc_dpdk, etc. As a result, we no longer need this config item.
This patch removes it.
Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The previous commit changed the size and the offsets of struct rte_eth_dev,
so it is an ABI breakage.
I revert it, and will send a deprecation notice for this.
Fixes: 1a1109404e ("config: increase max queues per port")
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
When a change makes it really hard to keep ABI compatibility,
instead of waiting for the next release to break the ABI, it is smoother
to introduce the new code as a preview and disable it when packaging.
The flag RTE_NEXT_ABI must be used to "ifdef" the new code.
When the release is out, a dynamically linked application can use
the new shared libraries with the old ABI, while developers can prepare
their application for the next ABI by reading the deprecation notice
and easily testing the new code.
When starting the next release cycle, the "ifdefs" will be removed
and the ABI break will be marked by incrementing LIBABIVER. The map
files will also be updated.
The default value is enabled to be developer compliant.
The packagers must disable it as done in pkg/dpdk.spec.
When enabled, all shared library numbers are incremented by appending
a minor .1 to the old ABI number. In the next release, only impacted
libraries will have a major +1 increment.
The impacted libraries must provide an alternative map file to use
with this option.
The ABI policy is updated.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
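A minimal sketch of how a library can preview an ABI-breaking change behind
the flag; the struct and field names here are hypothetical:

    #include <stdint.h>

    struct rte_example_conf {
            uint32_t nb_items;
    #ifdef RTE_NEXT_ABI
            /* New field previewed under RTE_NEXT_ABI; becomes unconditional
             * once the ABI break is marked by incrementing LIBABIVER. */
            uint32_t nb_extra_items;
    #endif
    };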
This patch removes CONFIG_RTE_LIBRTE_EAL_HOTPLUG option, and enables it
as default in both Linux and BSD.
Also, to support port hotplug, rte_eal_pci_scan() and below missing
symbols should be exported to ethdev library.
- rte_eal_parse_devargs_str()
- rte_eal_pci_close_one()
- rte_eal_pci_probe_one()
- rte_eal_pci_scan()
- rte_eal_vdev_init()
- rte_eal_vdev_uninit()
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Adds cxgbe poll mode driver for DPDK under drivers/net/cxgbe directory.
This patch:
1. Adds the Makefile to compile cxgbe pmd.
2. Registers and initializes the cxgbe pmd driver.
Enable cxgbe PMD for compilation and linking with changes to:
1. config/common_linuxapp to add macros for cxgbe pmd.
2. drivers/net/Makefile to add cxgbe pmd to the compile list.
3. mk/rte.app.mk to add cxgbe pmd to link.
Update MAINTAINERS file to claim responsibility for the cxgbe PMD.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
[Thomas: add disabled config for bsdapp]
The previous vhost-cuse implementation requires the fuse development package.
Now that we have the vhost-user implementation, which is enabled by default
and doesn't require an additional library to build, we can turn on vhost.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
When we get the address of the vring descriptor table in the
VHOST_SET_VRING_ADDR message, try to reallocate the vhost device and
virtqueue to the same NUMA node.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
On machines that are strict on pointer alignment, current code breaks
on GCC's -Wcast-align checks on casts from narrower to wider types.
This patch introduces new unaligned_uint(16|32|64)_t types, which
correctly retain alignment in such cases. Strict alignment
architectures will need to define CONFIG_RTE_ARCH_STRICT_ALIGN in
order for these new types to take effect.
Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
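A short usage sketch; the typedefs come with the EAL common headers (the
rte_common.h location is assumed here) and the helper name is illustrative:

    #include <stdint.h>
    #include <rte_common.h>  /* unaligned_uint16_t/32_t/64_t (location assumed) */

    /* Read a 32-bit value from a possibly unaligned position in a buffer.
     * On CONFIG_RTE_ARCH_STRICT_ALIGN targets the typedef carries an
     * alignment attribute; elsewhere it is a plain uint32_t, so the cast
     * no longer trips -Wcast-align. */
    static inline uint32_t
    read_u32_unaligned(const void *buf)
    {
            const unaligned_uint32_t *p = buf;

            return *p;
    }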
This patch adds statistics collection for librte_pipeline.
Those statistics are disabled by default during build time.
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Added common data structures for port statistics.
Added config option to enable stats collecting.
Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
The default value of RTE_MAX_QUEUES_PER_PORT is 256, which is too small
for some i40e configurations: rte_eth_dev_configure() returns an error when
the configured queue number is larger than 256.
For example, in the vHost sample, with a PF queue number of 64 and
63 configured VMDQ pools with 4 queues each,
64 + 63 * 4 = 316 queues are required on a port.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The function name is printed in each enic_ethdev function.
Disable it by default with a new build option.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Sujith Sankar <ssujith@cisco.com>
Reviewed-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
CONFIG_RTE_LIBRTE_MLX4_COMPAT_VMWARE has no effect since this option enables
MLX4_PMD_COMPAT_VMWARE. This macro is not used by the PMD which expects
MLX4_COMPAT_VMWARE instead.
Because this option does not work and the related code is no longer useful
for VMware (as it actually supports the flow steering API), remove it
entirely.
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
The data structure for the rx and tx callbacks is local to each process
since it contains function pointers and cannot be shared between
different unique binaries. However, because it is not in
rte_eth_dev_data structure, the array is not getting initialized for
secondary processes - neither is it getting appropriately resized if the
number of RX/TX queues changes. This causes crashes in secondary
processes as they dereference a null pointer in struct rte_eth_dev.
This patch fixes this by introducing an upper-bound on the number of
queues per port that can be configured, and then uses this to make the
array statically sized, thereby avoiding the crashes.
Fixes: 4dc294158c ("ethdev: support optional Rx and Tx callbacks")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Turn on CONFIG_RTE_LIBRTE_VHOST to enable vhost.
vhost-user is turned on by default. Turn off CONFIG_RTE_LIBRTE_VHOST_USER to
enable vhost-cuse implementation.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
The Null PMD is a virtual device driver designed particularly to measure the
performance of DPDK PMDs. When an application calls RX, the Null PMD just
allocates mbufs and returns them. Likewise on TX, the PMD just frees the mbufs.
The PMD has the following options.
- size: specify the packet size allocated on RX. Default packet size is 64.
- copy: specify 1 or 0 to enable or disable copying on RX and TX.
Default value is 0 (disabled).
This option is used for emulating more realistic data transfer.
Copy size is equal to packet size.
To use the PMD, enable CONFIG_RTE_BUILD_SHARED_LIB in the config file, then
compile the PMD as a shared library. The library can be loaded using the '-d'
option when an application is invoked.
Here is an example.
$ sudo ./testpmd -c f -n 4 -d librte_pmd_null.so \
--vdev 'eth_null0' --vdev 'eth_null1' -- -i --no-flush-rx
If testpmd is compiled with CONFIG_RTE_BUILD_SHARED_LIB, it may be necessary
to specify more libraries using the '-d' option.
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
The patch adds functions for unmapping igb_uio resources. The patch is only
for Linux and igb_uio environment. VFIO and BSD are not supported.
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
This PMD manages all variants of Mellanox ConnectX-3 (EN 40, EN 10, Pro EN
40) as well as their virtual functions in SR-IOV context through IB Verbs
(libibverbs) and the dedicated user-space driver (libmlx4).
It is disabled by default due to dependencies on these libraries and only
supports Linux userland at the moment partly because /sys (sysfs) support is
required.
Also claim responsibility in the MAINTAINERS file.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
This library provides an API to measure the time spent in particular parts of
code and to calculate the optimal polling time.
To calculate those statistics, the application code needs to be divided into
parts (called jobs) that do something. It is up to the application to decide
what is considered a job.
Series of jobs must be surrounded with the rte_jobstats_context_start()
and rte_jobstats_context_finish() calls. After that, jobs might be
started. Each job must be surrounded with rte_jobstats_start() and
rte_jobstats_finish() calls.
After a job finishes its execution, the period in which it should be called
again is adjusted. This can be used to minimize the time wasted on
unnecessary polls/calls. The adjustment is based on data provided by the job
itself (e.g. the number of packets it processed).
After all jobs in a series are executed, the following statistics are updated
and can be used by the application. Statistics can be reset. Some of the
provided statistics:
- total/min/max execution - time spent in executing jobs.
- total/min/max management - time spent outside execution area. This
value might be used to measure overhead of scheduling jobs. This time
also contains overhead of rte_jobstats library itself.
- number of loops that executed at least one job
- executed jobs
- time when statistics were reset.
Each job provides total/min/max execution time and execution count
statistics.
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
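A hedged sketch of the call pattern described above, using the function names
from the text; the exact prototypes, the one-time initialization calls (omitted
here) and the RX helper are assumptions:

    #include <rte_jobstats.h>

    extern int do_rx_burst(void);           /* hypothetical helper */

    static struct rte_jobstats_context ctx; /* surrounds a series of jobs */
    static struct rte_jobstats rx_job;      /* one job: polling for packets */

    static void
    poll_loop_iteration(void)
    {
            rte_jobstats_context_start(&ctx);

            rte_jobstats_start(&ctx, &rx_job);
            int nb_rx = do_rx_burst();
            /* Report how much work was done so the job's period is adjusted. */
            rte_jobstats_finish(&rx_job, nb_rx);

            rte_jobstats_context_finish(&ctx);
    }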
Add a sched_yield() syscall if the thread spins for too long,
waiting for another thread to finish its operations on the ring.
That gives a pre-empted thread a chance to proceed and finish
its ring enqueue/dequeue operation.
The purpose is to reduce contention on the ring.
According to ring_perf_test, it doesn't show an additional performance penalty.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
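The pattern is roughly the one sketched below; this is an illustration of
"yield after spinning too long", not the actual ring code, and the spin
threshold is an assumed tunable:

    #include <sched.h>
    #include <stdint.h>

    #define YIELD_AFTER_N_SPINS 100  /* assumed tunable, not the real constant */

    /* Spin until *tail reaches expected; if another (possibly pre-empted)
     * thread holds us up for too long, yield the CPU so it can finish its
     * enqueue/dequeue and update the tail. */
    static inline void
    wait_for_tail(volatile const uint32_t *tail, uint32_t expected)
    {
            unsigned int spins = 0;

            while (*tail != expected) {
                    if (++spins > YIELD_AFTER_N_SPINS) {
                            sched_yield();
                            spins = 0;
                    }
            }
    }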
Add optional support for inline processing of packets inside the RX
or TX call. For an RX callback, what happens is that we get a set of
packets from the NIC and then pass them to a callback function, if
configured, to allow additional processing to be done on them, e.g.
filling in more mbuf fields, before passing back to the application.
On TX, the packets are similarly post-processed before being handed
to the NIC for transmission.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
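For illustration, a hedged sketch of an RX callback that does some extra
per-burst work (here: counting packets); the callback prototype shown, with
its uint8_t port parameter, is an assumption based on the API of that time:

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static uint64_t rx_pkt_count;

    /* Called on every RX burst before the packets reach the application. */
    static uint16_t
    rx_count_cb(uint8_t port, uint16_t queue, struct rte_mbuf *pkts[],
                uint16_t nb_pkts, uint16_t max_pkts, void *user_param)
    {
            uint64_t *counter = user_param;

            (void)port; (void)queue; (void)max_pkts;
            *counter += nb_pkts;   /* extra inline processing goes here */
            return nb_pkts;
    }

    static void
    install_rx_counter(uint8_t port_id, uint16_t queue_id)
    {
            /* Register after rte_eth_rx_queue_setup() for this queue. */
            rte_eth_add_rx_callback(port_id, queue_id, rx_count_cb,
                                    &rx_pkt_count);
    }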
This patch removes all references to RTE_MBUF_REFCNT, setting the refcnt
field in the mbuf struct permanently.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch introduces CONFIG_RTE_KNI_PREEMPT_DEFAULT flag. When set to 'no',
KNI kernel thread(s) do not call schedule_timeout_interruptible(), which
improves overall KNI performance at the expense of CPU cycles (polling).
Default value is 'yes', maintaining the same behaviour as of now.
Signed-off-by: Marc Sune <marc.sune@bisdn.de>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds some debug information for link bonding mode 6.
It prints basic information about ARP packets on RX and TX (MAC, IP,
packet number, ARP packet type)
if CONFIG_RTE_LIBRTE_BOND_DEBUG_ALB == y.
If CONFIG_RTE_LIBRTE_BOND_DEBUG_ALB_L1 is enabled instead of the previous
option, use the show command to see IPv4 balancing from clients.
Signed-off-by: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
x32 ABI provides benefits of x86-64 while using 32-bit pointers and
avoiding overhead of 64-bit pointers.
Test report: http://dpdk.org/ml/archives/dev/2015-February/012599.html
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Tested-by: Haifeng Tang <haifengx.tang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This library provides reordering capability for out-of-order mbufs based
on a sequence number in the mbuf structure.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Signed-off-by: Richardson Bruce <bruce.richardson@intel.com>
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
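A hedged usage sketch based on the description above; the rte_reorder
function names and prototypes are recalled from the library and should be
treated as assumptions, as should the consumer helper:

    #include <rte_mbuf.h>
    #include <rte_reorder.h>

    extern void transmit_in_order(struct rte_mbuf *m);  /* hypothetical consumer */

    /* 'b' is assumed to have been created earlier with rte_reorder_create(). */
    static void
    reorder_one(struct rte_reorder_buffer *b, struct rte_mbuf *pkt)
    {
            struct rte_mbuf *ordered[32];
            unsigned int i, n;

            /* The sequence number field in the mbuf is assumed to have been
             * filled in by an earlier stage of the pipeline. */
            if (rte_reorder_insert(b, pkt) != 0)
                    rte_pktmbuf_free(pkt);   /* e.g. outside the reorder window */

            /* Drain whatever is now in-order and pass it on. */
            n = rte_reorder_drain(b, ordered, 32);
            for (i = 0; i < n; i++)
                    transmit_in_order(ordered[i]);
    }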
1. Add init function to scan and initialize fm10k PF device.
2. Add implementation to register fm10k pmd PF driver.
3. Add 3 functions fm10k_dev_configure, fm10k_stats_get and
fm10k_stats_get.
4. Add fm10k.h to define macros and basic data structure.
5. Add fm10k_logs.h to control log message output.
6. Change config/common_bsdapp and config/common_linuxapp, add
macros to control fm10k pmd driver compile for linux and bsd.
7. Add Makefile.
8. Change lib/Makefile to add fm10k driver into compile list.
9. Change mk/rte.app.mk to add fm10k lib into link.
10. Add ABI version of librte_pmd_fm10k
Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com>
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Signed-off-by: Michael Qiu <michael.qiu@intel.com>
This is a duplication of some EAL parts for a standalone packaging
which is not documented.
Packaging should be done outside of DPDK.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This allows the PMD to compile with kernels that don't support the
options in question. The "#if defined(...)" lines are a bit ugly,
but I don't know of any better way to accomplish the task.
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
There is no standard to check endianness.
So we need to try different checks.
Previous trials were done in testpmd (see commits
51f694dd40 and 64741f237c) without full success.
This one is not guaranteed to work everywhere so it could
evolve when exceptions are found.
If endianness is not detected, there is a fallback on x86
to little endian. It could be forced before doing detection
but it would add some arch-dependent code in the generic header.
The option CONFIG_RTE_ARCH_BIG_ENDIAN introduced for IBM Power only
(commit a982ec81d8) can be removed. A compile-time check is better.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
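A hedged sketch of the kind of compile-time fallback chain meant here; the
macro names use an EXAMPLE_ prefix because the real header may spell its
results differently, and the exact set of compiler hints it checks may differ:

    /* Try common compiler-provided hints first, then weaker ones, then
     * fall back to assuming little endian on x86. */
    #if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
    #define EXAMPLE_BIG_ENDIAN 1
    #elif defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
    #define EXAMPLE_LITTLE_ENDIAN 1
    #elif defined(__BIG_ENDIAN__)
    #define EXAMPLE_BIG_ENDIAN 1
    #elif defined(__LITTLE_ENDIAN__)
    #define EXAMPLE_LITTLE_ENDIAN 1
    #elif defined(RTE_ARCH_X86)
    #define EXAMPLE_LITTLE_ENDIAN 1   /* x86 fallback when detection fails */
    #endif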
enic driver is giving trouble because of non-standard types :
CC enic_res.o
In file included from
lib/librte_pmd_enic/enic_res.c:36:0:
lib/librte_pmd_enic/enic_compat.h:92:1: error: unknown type name ‘u_int32_t’
static inline u_int32_t ioread32(volatile void *addr)
^
Disable it on Power for now.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Sujith Sankar <ssujith@cisco.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
[Thomas: enable for BSD - not tested]
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The mmap of hugepage files on IBM Power starts from high address to low
address. This is different from x86. This patch modified the memory
segment detection code to get the correct memory segment layout on Power
architecture. This patch also adds a common ARCH_PPC_64 definition for
64-bit systems.
Signed-off-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Acked-by: David Marchand <david.marchand@6wind.com>
This patch adds architecture specific byte order operations for IBM Power
architecture. Power architecture support both big endian and little
endian. This patch also adds an RTE_ARCH_BIG_ENDIAN macro.
Signed-off-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Acked-by: David Marchand <david.marchand@6wind.com>
To make DPDK run on the IBM Power architecture, configuration files for
the Power architecture are added. Also, the compilation-related .mk files
are added.
Signed-off-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Acked-by: David Marchand <david.marchand@6wind.com>
New platforms have more than 64 cores.
Set default max cores number to 128.
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This is a Linux-specific virtual PMD driver backed by an AF_PACKET
socket. This implementation uses mmap'ed ring buffers to limit copying
and user/kernel transitions. The PACKET_FANOUT_HASH behavior of
AF_PACKET is used for frame reception. In the current implementation,
Tx and Rx queues are always paired, and therefore are always equal
in number -- changing this would be a Simple Matter Of Programming.
Interfaces of this type are created with a command line option like
"--vdev=eth_af_packet0,iface=...". There are a number of options available
as arguments:
- Interface is chosen by "iface" (required)
- Number of queue pairs set by "qpairs" (optional, default: 1)
- AF_PACKET MMAP block size set by "blocksz" (optional, default: 4096)
- AF_PACKET MMAP frame size set by "framesz" (optional, default: 2048)
- AF_PACKET MMAP frame count set by "framecnt" (optional, default: 512)
Signed-off-by: John W. Linville <linville@tuxdriver.com>
[Thomas: disable because of incompatibility with some kernels]
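Combining the options above, a testpmd invocation could look like the
following (the interface name is an example):
$ sudo ./testpmd -c f -n 4 \
  --vdev 'eth_af_packet0,iface=eth0,qpairs=2,blocksz=4096,framesz=2048,framecnt=512' \
  -- -i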
Remove 'CONFIG_RTE_LIBRTE_I40E_PF_DISABLE_STRIP_CRC'
from the config files, as nothing uses it.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The change includes several parts:
1. Get maximum number of VMDQ pools supported in dev_init.
2. Fill VMDQ info in i40e_dev_info_get.
3. Setup VMDQ pools in i40e_dev_configure.
4. i40e_vsi_setup change to support creation of VMDQ VSI.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Tested-by: Min Cao <min.cao@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
vhost lib is turned off by default.
vhost lib is based on cuse, which requires fuse development package
to be installed.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: fix build dependencies]
No need to restrict usage of non-Intel SFPs.
If (hw->phy.type == ixgbe_phy_sfp_intel) is false,
a warning will be logged.
It was disabled for ixgbe and enabled but unused for i40e.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The vector PMD expects fields to be in a specific order so that it can
do vector operations on multiple fields at a time. Following mbuf
rework, adjust driver to take account of the new layout and re-enable it
in the config.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The mbuf structure already contains a pointer to the beginning of the
buffer (m->buf_addr). It is not needed to use 8 bytes again to store
another pointer to the beginning of the data.
Using a 16-bit unsigned integer is enough, as we know that an mbuf is
never longer than 64KB. We gain 6 bytes in the structure thanks to
this modification.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
* Updated to apply to latest on mainline.
* Disabled vector PMD in config as it relies heavily on the mbuf layout
This will be re-enabled in a subsequent commit once vPMD has been
reworked to take account of mbuf changes.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
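In other words, the start of the packet data is now recovered from the buffer
address plus a 16-bit offset; a minimal sketch of the accessor pattern (the
helper name is illustrative):

    #include <rte_mbuf.h>

    /* data start = buffer start + 16-bit data offset */
    static inline void *
    example_mbuf_data(const struct rte_mbuf *m)
    {
            return (char *)m->buf_addr + m->data_off;
    }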
It seems that RTE_MBUF_SCATTER_GATHER is not the proper name for the
feature it provides. "Scatter gather" means that data is stored using
several buffers. RTE_MBUF_REFCNT seems to be a better name for that
feature as it provides a reference counter for mbufs.
The macro RTE_MBUF_SCATTER_GATHER is poisoned to ensure this
modification is seen by drivers or applications using it.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>