Introduce a new function to read the packet data from an mbuf chain. It
linearizes the data if required, and also ensures that the mbuf is large
enough.
This function is used in the next commits, which add a software parser to
retrieve the packet type.
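For illustration, a minimal usage sketch, assuming the new helper has the
rte_pktmbuf_read(mbuf, offset, length, scratch-buffer) shape found in the
upstream tree:

#include <rte_mbuf.h>
#include <rte_ether.h>

/* Read the start of the packet, wherever it lives in the mbuf chain:
 * returns a pointer into the mbuf if the requested range is contiguous,
 * otherwise copies (linearizes) it into 'scratch'; NULL if the packet
 * is shorter than the requested range. */
static const struct ether_hdr *
get_eth_hdr(const struct rte_mbuf *m, struct ether_hdr *scratch)
{
	return rte_pktmbuf_read(m, 0, sizeof(*scratch), scratch);
}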
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
If a pci probe operation creates a port but, for any reason, fails to
finish this operation and decides to delete the newly created port, then
the last created port id cannot be trusted anymore and any subsequent
attach operations will fail.
This problem was noticed while working on a VM that had a virtio-net
management interface bound to the virtio-net kernel driver and no port
whitelisted on the command line:
root@ubuntu1404:~/dpdk# ./build/app/testpmd -c 0x6 --
-i --total-num-mbufs=2049
EAL: Detected 3 lcore(s)
EAL: Probing VFIO support...
EAL: Debug logs available - lower performance
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
unreliable clock cycles !
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL: probe driver: 1af4:1000 (null)
rte_eth_dev_pci_probe: driver (null): eth_dev_init(vendor_id=0x6900
device_id=0x1000) failed
EAL: No probed ethernet devices
^
|
Here, rte_eth_dev_pci_probe() fails since vtpci_init() reports an
error. This results in a rte_eth_dev_release_port() right after a
rte_eth_dev_allocate().
Then, if we try to attach a port using rte_eth_dev_attach:
testpmd> port attach net_ring0
Attaching a new port...
PMD: Initializing pmd_ring for net_ring0
PMD: Creating rings-backed ethdev on numa socket 0
Two solutions:
- either update the last created port index to something invalid
(when freeing an ethdev port),
- or rely on the port count, before and after the eal attach.
The latter solution seems (well, not really more robust, but at least)
less fragile than the former.
We still have some issues with drivers that create multiple ethdev
ports with a single probe operation, but this was already the case.
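A hedged sketch of the chosen approach (illustrative only, not the exact
ethdev code): compare the port count before and after the EAL attach
instead of trusting the last created port id.

#include <rte_dev.h>
#include <rte_ethdev.h>

/* Attach a device and verify that an ethdev port really appeared. */
static int
attach_checked(const char *name, const char *devargs)
{
	uint8_t before = rte_eth_dev_count();

	if (rte_eal_dev_attach(name, devargs) < 0)
		return -1;

	if (rte_eth_dev_count() == before)
		return -1; /* probe ran, but no new port was created */

	return 0;
}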
Fixes: b0fb266855 ("ethdev: convert to EAL hotplug")
Reported-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
To enable Tx side offload on tunneling packets, the driver should set
the correct tunneling parameters: (1) EIPT, external IP header type;
(2) EIPLEN, external IP header length; (3) L4TUNT; (4) L4TUNLEN. This
parsing behavior is based on (ol_flags & PKT_TX_TUNNEL_MASK), and when
it's a tunneling packet, MACLEN defines the outer L2 header.
Also, we define TSO on each kind of tunneling type as a capability.
For now, only i40e declares support for them.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
To support tunneling packet offload capabilities on the Tx side, PMDs
(e.g., i40e) need to know the tunneling type of each packet.
Instead of analyzing the packet itself, we depend on applications to
correctly set the tunneling type. These flags are defined inside
rte_mbuf.ol_flags.
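For illustration, a sketch of how an application could tag a
VXLAN-encapsulated TCP packet (the lengths below are examples for
untagged Ethernet / IPv4 headers, and the flag combination is an
assumption of a typical setup, not something mandated by this commit):

#include <rte_mbuf.h>

static void
set_vxlan_tx_offload(struct rte_mbuf *m)
{
	m->outer_l2_len = 14;   /* outer Ethernet header */
	m->outer_l3_len = 20;   /* outer IPv4 header */
	m->l2_len = 8 + 8 + 14; /* UDP + VXLAN + inner Ethernet */
	m->l3_len = 20;         /* inner IPv4 header */
	m->l4_len = 20;         /* inner TCP header */

	m->ol_flags |= PKT_TX_TUNNEL_VXLAN | PKT_TX_OUTER_IPV4 |
		       PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
}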
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This commit fixes the following build error, which happens in SUSE 11 SP2,
with gcc 4.5.1:
In file included from lib/librte_cryptodev/rte_cryptodev.c:70:0:
lib/librte_cryptodev/rte_cryptodev.h:772:7:
error: flexible array member in otherwise empty struct
Fixes: 347a1e037f ("lib: use C99 syntax for zero-size arrays")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This code provides the initial implementation of the libcrypto
poll mode driver. All cryptography operations use the OpenSSL
library crypto API. Each algorithm uses the EVP_ interface from
the OpenSSL API, as recommended by the OpenSSL maintainers.
This patch adds libcrypto poll mode driver support to librte_cryptodev
library.
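For context, a small standalone illustration of the EVP_ interface style
(plain OpenSSL code, not the driver sources):

#include <openssl/evp.h>

/* Encrypt one buffer with AES-128-CBC through the EVP_ interface. */
static int
evp_encrypt(const unsigned char *key, const unsigned char *iv,
	    const unsigned char *in, int inlen, unsigned char *out)
{
	EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
	int len, outlen;

	if (ctx == NULL)
		return -1;
	if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv) != 1 ||
	    EVP_EncryptUpdate(ctx, out, &len, in, inlen) != 1) {
		EVP_CIPHER_CTX_free(ctx);
		return -1;
	}
	outlen = len;
	if (EVP_EncryptFinal_ex(ctx, out + outlen, &len) != 1) {
		EVP_CIPHER_CTX_free(ctx);
		return -1;
	}
	outlen += len;
	EVP_CIPHER_CTX_free(ctx);
	return outlen;
}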
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added a new SW PMD which makes use of the libsso SW library,
which provides the wireless algorithms ZUC EEA3 and EIA3
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_ZUC_EEA3
- RTE_CRYPTO_SYM_AUTH_ZUC_EIA3
The ZUC hash and cipher algorithms, which are enabled
by this crypto PMD, are implemented by Intel's libsso software
library.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The KASUMI algorithm name has all uppercase letters,
but some references to it used lowercase letters.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The SNOW 3G algorithm name has all uppercase letters
and a space between SNOW and 3G, but some references to it
used lowercase letters or omitted the space.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
In file rte_crypto_sym.h, the GMAC API comments need to be changed
to comply with the GMAC specification. The main areas of change are
the aad pointer and aad length, which will now be used to
provide the plaintext.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
In line with PCI probe and remove, VDEV probe and remove hooks provide
uniform naming.
For PCI, probe represents scan and driver initialization; for VDEV, it
represents argument parsing and initialization.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
When compiled with EXTRA_CFLAGS="-O1", the compiler is not
able to detect that size is always initialized when used, and
emits a spurious warning:
eal_memory.c: In function ‘rte_eal_hugepage_attach’:
eal_memory.c:1684:3: error: ‘size’ may be used uninitialized in this
function [-Werror=maybe-uninitialized]
munmap(hp, size);
^
Work around this issue by initializing size to 0.
Seen on gcc (Debian 5.4.1-1) 5.4.1 20160803.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Not sure what exactly changed and where, but I've started getting
build failures on Fedora rawhide i386:
lib/librte_ip_frag/ip_frag_internal.c:36:23: fatal error:
rte_jhash.h: No such file or directory
#include <rte_jhash.h>
^
Looking at librte_ip_frag, it clearly depends on librte_hash, so
it's probably more a question of something commonly masking the issue.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Some applications use the rte_mbuf_raw_alloc() function to improve
performance by not resetting the mbuf's fields to their default state.
This can however be problematic for mbuf consumers that need some
headroom, meaning that the data_off field gets decremented after
allocation. When the mbuf is re-used afterwards, there might not
be enough room for the consumer to prepend anything, if the data_off
field is not reset to its default value.
This patch adds a new rte_pktmbuf_reset_headroom() function that
applications can call to reset the data_off field.
This patch also replaces the current data_off assignments in the mbuf
lib with a call to this function.
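A minimal usage sketch (assuming the consumer wants the default headroom
back before prepending a header):

#include <rte_mbuf.h>

/* Raw-allocate an mbuf (fields are not reset, for performance) and
 * restore the default headroom so rte_pktmbuf_prepend() has room. */
static struct rte_mbuf *
alloc_with_headroom(struct rte_mempool *mp)
{
	struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);

	if (m == NULL)
		return NULL;

	/* data_off may have been decremented by a previous user. */
	rte_pktmbuf_reset_headroom(m);
	return m;
}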
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
On error, the mempool object has to be freed, and rte_errno should be a
positive value.
Fixes: 152ca51790 ("mbuf: use default mempool handler from config")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
This patch replaces the pipelined rte_hash lookup mechanism with a
loop-and-jump model, which performs significantly better,
especially for smaller table sizes and smaller table occupancies.
Signed-off-by: Byron Marohn <byron.marohn@intel.com>
Signed-off-by: Saikrishna Edupuganti <saikrishna.edupuganti@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Sameh Gobriel <sameh.gobriel@intel.com>
In lookup bulk function, the signatures of all entries
are compared against the signature of the key that is being looked up.
Now that all the signatures are together, they can be compared
with vector instructions (SSE, AVX2), achieving higher lookup performance.
Also, entries per bucket are increased to 8 when using processors
with AVX2, as 256 bits can be compared at once, which is the size of
8x32-bit signatures.
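An illustrative sketch of the vectorized comparison (not the rte_hash
sources; it assumes the 8 signatures of a bucket are stored contiguously
as 32-bit values):

#include <stdint.h>
#include <immintrin.h>

/* Compare the probe signature against the 8 signatures of a bucket in
 * one AVX2 operation; bit i of the result is set if slot i matches. */
static inline uint32_t
match_signatures_avx2(const uint32_t sigs[8], uint32_t probe_sig)
{
	__m256i bucket = _mm256_loadu_si256((const __m256i *)sigs);
	__m256i probe  = _mm256_set1_epi32((int)probe_sig);
	__m256i hits   = _mm256_cmpeq_epi32(bucket, probe);

	return (uint32_t)_mm256_movemask_ps(_mm256_castsi256_ps(hits));
}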
Signed-off-by: Byron Marohn <byron.marohn@intel.com>
Signed-off-by: Saikrishna Edupuganti <saikrishna.edupuganti@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Sameh Gobriel <sameh.gobriel@intel.com>
Move the current signatures of all entries together in the bucket,
and do the same with all the alternative signatures, instead of having
current and alternative signatures together per entry in the bucket.
This will be beneficial in the next commits, where a vectorized
comparison will be performed, achieving better performance.
The alternative signatures have been moved away from
the current signatures so that the key indices are consecutive
to the current signatures; these two fields are used by lookup,
so they stay in the same cache line.
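A hedged before/after sketch of the layout change (types, names and the
entry count are illustrative, not the exact rte_hash definitions):

#include <stdint.h>

#define ENTRIES_PER_BUCKET 4 /* bumped to 8 on AVX2 builds by a later commit */

/* Before: current and alternative signatures interleaved per entry. */
struct bucket_before {
	struct {
		uint32_t current;
		uint32_t alt;
	} sig[ENTRIES_PER_BUCKET];
	uint32_t key_idx[ENTRIES_PER_BUCKET];
};

/* After: current signatures grouped for vector comparison, key indices
 * right behind them (both are read by lookup, same cache line), and
 * alternative signatures moved to the end. */
struct bucket_after {
	uint32_t sig_current[ENTRIES_PER_BUCKET];
	uint32_t key_idx[ENTRIES_PER_BUCKET];
	uint32_t sig_alt[ENTRIES_PER_BUCKET];
};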
Signed-off-by: Byron Marohn <byron.marohn@intel.com>
Signed-off-by: Saikrishna Edupuganti <saikrishna.edupuganti@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Sameh Gobriel <sameh.gobriel@intel.com>
In order to optimize lookup performance, the hash structure
is reordered so that all fields used for lookup are
in the first cache line.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Sameh Gobriel <sameh.gobriel@intel.com>
For periodic timers, if lag got introduced, the current code
added additional delay when the next periodic timer was initialized,
because it did not take the accumulated delay into account. With this
fix, the next occurrence of the timer is scheduled taking the
accumulated lag into account.
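For illustration, a hypothetical sketch of the scheduling difference
(not the rte_timer code itself):

#include <stdint.h>

/* Lag-accumulating: every late callback pushes all later occurrences. */
static uint64_t
next_expiry_from_now(uint64_t now, uint64_t period)
{
	return now + period;
}

/* Lag-free: the series stays anchored to the original schedule, even
 * if one callback ran late. */
static uint64_t
next_expiry_from_deadline(uint64_t prev_expiry, uint64_t period)
{
	return prev_expiry + period;
}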
Fixes: 9b15ba89 ("timer: use a skip list")
Signed-off-by: Karmarkar Suyash <skarmarkar@sonusnet.com>
Acked-by: Robert Sanford <rsanford@akamai.com>
Running a secondary process is tricky due to the need to map the memory
region at the right place in the VM, which is wherever the primary has
chosen. If the base address chosen by the primary happens to be already
mapped in the secondary, we will hit precisely these error messages
(depending on whether we fail on the config region or the hugepages).
This is why there is already a comment about ASLR.
The issue is that in most cases, remapping does not happen and "errno"
is not changed and therefore stale. In our case, we got a "permission
denied", which sent us down the wrong track. It's such a common error
for secondary processes that I feel this error message should be
unambiguous and helpful.
The call to close was also moved because close() may override errno.
Signed-off-by: Jean Tourrilhes <jt@labs.hpe.com>
When compiling with C++, the compiler treats
void (*rte_delay_us)(unsigned int us);
as a definition of the global variable, so further linking with
librte_eal fails.
Fixes: b4d63fb622 ("eal: customize delay function")
Steps to reproduce:
$ cat rttm1.cpp
#include <iostream>
#include <rte_eal.h>
#include <rte_cycles.h>
using namespace std;
int main(int argc, char *argv[])
{
	int ret = rte_eal_init(argc, argv);
	rte_delay_us(1);
	cout << "return code ";
	cout << ret;
	return ret;
}
$ g++ -m64 -I/${RTE_SDK}/${RTE_TARGET}/include -c -o rttm1.o rttm1.cpp
$ gcc -m64 -pthread -o rttm1 rttm1.o -ldl -Wl,-lstdc++ \
-L/${RTE_SDK}/${RTE_TARGET}/lib -Wl,-lrte_eal
.../librte_eal.a(eal_common_timer.o):
(.bss+0x0): multiple definition of `rte_delay_us'
rttm1.o:(.bss+0x0): first defined here
collect2: error: ld returned 1 exit status
$ nm rttm1.o | grep rte_delay_us
0000000000000092 t _GLOBAL__sub_I_rte_delay_us
0000000000000000 B rte_delay_us
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Indirect descriptors are usually supported by virtio-net devices,
allowing a larger number of requests to be dispatched.
When the virtio device sends a packet using indirect descriptors,
only one slot is used in the ring, even for large packets.
The main effect is to improve the 0% packet loss benchmark.
A PVP benchmark using Moongen (64 bytes) on the TE, and testpmd
(fwd io for host, macswap for VM) on DUT shows a +50% gain for
zero loss.
On the downside, a micro-benchmark using testpmd txonly in the VM and
rxonly on the host shows a loss between 1 and 4%. But depending on
the needs, the feature can be disabled at VM boot time by passing the
indirect_desc=off argument to the vhost-user device in QEMU.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
As new_device and destroy_device use an int instead of a
"struct virtio_net *", the comment about setting VIRTIO_DEV_RUNNING
doesn't make sense anymore. Moreover, if I've correctly understood the
code, the drivers take care of setting the flag before calling the
callbacks, so I guess this comment is obsolete and I've removed it.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
No need to use a pointer to store/retrieve features.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Invoke get_device() at the beginning of vhost_user_msg_handler, so that
we check the return value only once. This saves tons of duplicate
get-and-check device calls.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Some functions have the prefix "user_", while others have "vhost_".
Make them all start with "vhost_user_" to unify the function names.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
For historical reasons (we used to have 2 vhost implementations), some
messages are handled in two calls: the vhost-specific implementation
handles the message first and then invokes the common one to do further
handling.
Since we have only one implementation now, we can write one method for
each message. Here, fold those common handlers into the corresponding
vhost-user handlers.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The code structure is a bit messy now. For example, vhost-user message
handling is spread to three different files:
vhost-net-user.c virtio-net.c virtio-net-user.c
Here, vhost-net-user.c is the entry point that handles all those messages
and then invokes the right method for a specific message. Some of those
methods live in virtio-net.c, while others live in virtio-net-user.c.
The truth is that all of them should be in one file, vhost_user.c.
So this patch refactors the source code structure: mainly renaming
files and moving code to the file that is more suitable for it.
Thus, no functional changes are made.
After the refactor, the code structure becomes:
- socket.c handles all vhost-user socket file related stuff, such
as, socket file creation for server mode, reconnection
for client mode.
- vhost.c mainly on stuff like vhost device creation/destroy/reset.
Most of the vhost API implementation is there, too.
- vhost_user.c all stuff about vhost-user messages handling goes there.
- virtio_net.c all stuff about virtio-net should go there. It has virtio
net Rx/Tx implementation only so far: it's just a rename
from vhost_rxtx.c
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
We now have only one vhost implementation; no sub source dir is needed.
Remove it by moving the files to the upper dir.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Remove the vhost-cuse code, including the eventfd_link kernel module that
is for vhost-cuse only.
The lib/virt/qemu-wrap.py script is also removed, as it's mainly for
vhost-cuse usage.
As we have only one vhost implementation now, only one vhost config
option is needed. Thus, CONFIG_RTE_LIBRTE_VHOST_USER is removed.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In function rte_hash_cuckoo_insert_mw_tm, while looking for
an empty slot, only the first entry in the bucket was being checked,
as the key_idx array was not being iterated.
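The corrected search pattern, sketched for illustration (types and names
approximate the rte_cuckoo_hash internals):

/* Scan every slot of a bucket, not only the first one, for an empty
 * key index; return the free slot, or -1 if the bucket is full. */
static int
find_empty_slot(const uint32_t key_idx[RTE_HASH_BUCKET_ENTRIES])
{
	int i;

	for (i = 0; i < RTE_HASH_BUCKET_ENTRIES; i++)
		if (key_idx[i] == EMPTY_SLOT)
			return i;
	return -1;
}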
Fixes: 5fc74c2e14 ("hash: check if slot is empty with key index")
Reported-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The "imissed" stats represent RX packets dropped by the HW,
so we should not talk about mbufs as the hardware is not aware
of this structure. Buffer seems to be a better word.
Fixes: 4eadb8ba11 ("ethdev: do not deprecate imissed counter")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Fixes: 85226f9c52 ("mempool: introduce a function to create an empty pool")
Fixes: d1d914ebbc ("mempool: allocate in several memory chunks by default")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The existing cntvct_el0 based rte_rdtsc() provides a portable
means to get a wall clock counter from user space. Typically
it runs at <= 100MHz.
The alternative method to enable rte_rdtsc() for a high resolution
wall clock counter is through the armv8 PMU subsystem.
The PMU cycle counter runs at CPU frequency; however,
access to the PMU cycle counter from user space is not enabled
by default in the arm64 Linux kernel.
It is possible to enable user space access to the cycle counter
by configuring the PMU from privileged mode (kernel space).
By default, the rte_rdtsc() implementation uses the portable
cntvct_el0 scheme. Applications can choose the PMU based
implementation with CONFIG_RTE_ARM_EAL_RDTSC_USE_PMU.
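For reference, a minimal sketch of the two counter reads discussed above
(aarch64-only inline assembly; the PMU read works only once user space
access has been enabled from kernel space):

#include <stdint.h>

static inline uint64_t
read_cntvct(void)
{
	uint64_t tsc;

	/* Generic timer: portable, typically <= 100MHz. */
	asm volatile("mrs %0, cntvct_el0" : "=r" (tsc));
	return tsc;
}

static inline uint64_t
read_pmccntr(void)
{
	uint64_t tsc;

	/* PMU cycle counter: runs at CPU frequency. */
	asm volatile("mrs %0, pmccntr_el0" : "=r" (tsc));
	return tsc;
}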
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Fixes: dbe6b4b61b ("pci: probe or close device")
Signed-off-by: Yangchao Zhou <zhouyates@gmail.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Now that rte_device is available, drivers can start using its members
(numa, name) as well as link themselves into another rte_device list.
As of now no one is using this list, but it can be used for iterating over
all devices (pdev/vdev/Xdev) and performing bulk actions (like cleanup).
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
[Shreyansh: Reword commit log for extra rte_device list]
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: David Marchand <david.marchand@6wind.com>
To register both vdev and pci drivers into the list of all rte_driver,
we have to call rte_eal_driver_register explicitly.
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Remove the 'name' member from rte_pci_driver and move to generic
rte_driver.
Most of the PMD drivers were initially using DRIVER_REGISTER_PCI(<name>..)
as well as assigning a name to the eth_driver.pci_drv.name member.
In this patch, only the original DRIVER_REGISTER_PCI(<name>..) name has
been populated into the rte_driver.name member; the assignments through
eth_driver have been removed.
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
[Shreyansh: Rebase and expand changes to newly added files]
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: David Marchand <david.marchand@6wind.com>
There is no need to have a custom memory resource representation for
each infrastructure (PCI, ...) as it would always have the same members.
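A hedged sketch of the idea (member names illustrative):

#include <stdint.h>

/* One generic memory resource usable by any bus: PCI BARs today,
 * other infrastructures later. */
struct mem_resource_sketch {
	uint64_t phys_addr; /* physical start address */
	uint64_t len;       /* length of the resource */
	void *addr;         /* virtual address once mapped, or NULL */
};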
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Further refactoring and generalization of PCI infrastructure will
require access to the rte_dev.h contents.
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: David Marchand <david.marchand@6wind.com>
- All devices register themselves by calling a DRIVER_REGISTER_XXX variant.
The PMD_REGISTER_DRIVER macro is not used anymore.
- The PMD_VDEV type is also no longer used and can be removed from all VDEVs.
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: David Marchand <david.marchand@6wind.com>
All PMD_VDEV drivers can now use rte_vdev_driver instead of the
rte_driver (which is embedded in the rte_vdev_driver).
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: David Marchand <david.marchand@6wind.com>
- Remove checks for VDEV from rte_eal_vdev_(init/uninit), as all devices
are inherently virtual here.
- PDEVs perform PCI-specific inits; rte_eal_dev_init() need not call
rte_driver->init().
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
[Shreyansh: Reword commit log]
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Move all PMD_VDEV-specific code into a separate module and header
file so as not to pollute the generic code anymore. There is now a list
of virtual devices available.
The rte_vdev_driver integrates the original rte_driver inside
(C inheritance). The rte_driver will however be changed in the
future to serve as a common base for all other types of drivers.
The existing PMDs (PMD_VDEV) are to be modified later (there is
no change for them at the moment).
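A hypothetical sketch of the C-inheritance pattern described above
(member names are illustrative, not the exact DPDK definitions):

#include <sys/queue.h>

struct driver_sketch {
	const char *name; /* generic driver name */
};

struct vdev_driver_sketch {
	TAILQ_ENTRY(vdev_driver_sketch) next; /* list of virtual drivers */
	struct driver_sketch driver;          /* embedded generic base */
	int (*init)(const char *name, const char *args);
	int (*uninit)(const char *name);
};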
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: David Marchand <david.marchand@6wind.com>