This function was supposed to be inlined, but was not because several
functions call it. This function should always be inlined to avoid
external function calls and to optimize code in the data path.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Rejected packets were routed to a polled queue. This patch routes them
to a dummy queue which is not polled.
Fixes: 76f5c99e68 ("mlx5: support flow director")
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This is done to prepare support for drop queues, which are not related to
existing Rx queues and need to be managed separately.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
memmove was moving only as many bytes as the number of elements after
index i, while it should move that number of elements multiplied by the
size of each element.
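A minimal sketch of the corrected call, assuming an array of 16-bit VLAN
IDs (the identifier names here are illustrative, not necessarily the
driver's):

/* shift the remaining VLAN IDs down by one slot, sized in bytes */
memmove(&vlan_filter[i], &vlan_filter[i + 1],
    (n_filters - (i + 1)) * sizeof(vlan_filter[0]));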
Fixes: e9086978 ("mlx5: support VLAN filtering")
Signed-off-by: Raslan Darawsheh <rdarawsheh@asaltech.com>
This capability is implemented but not reported.
Fixes: f3db948918 ("mlx5: support Rx VLAN stripping")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The return value in DPDK is a negative errno on failure. Since
internal functions in the mlx5 driver return positive values, the value
needs to be negated when it is returned to the DPDK layer.
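A hedged sketch of the convention (the helper name is hypothetical):

/* internal helpers return a positive errno; ethdev callbacks return -errno */
ret = fdir_filter_add(priv, fdir_filter);    /* hypothetical internal helper */
return ret ? -ret : 0;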
Fixes: 76f5c99 ("mlx5: support flow director")
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
The user is allowed to call ->rx_pkt_burst() even when there are no
free mbufs in the pool. In this scenario we will fail to allocate a rep
mbuf on the first iteration (where pkt is still NULL), which would
cause us to dereference a NULL pkt (reset refcount and free).
Fix this by checking the pkt before freeing it.
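A minimal sketch of the guard, with local variable names taken as
assumptions from the description above:

rep = rte_mbuf_raw_alloc(rxq->mp);
if (unlikely(rep == NULL)) {
    /* only release the in-flight mbuf if one was actually taken */
    if (pkt)
        rte_pktmbuf_free(pkt);
    break;
}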
Fixes: a1bdb71a32 ("net/mlx5: fix crash in Rx")
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
To make the PCTYPE on x722 compatible with the original PCTYPE in
flow director (FD) filters, the PCTYPE in the FD programming descriptor
needs to be translated into a different PCTYPE using the
GLQF_FD_PCTYPE table.
Translation needs to be done before the FD filter is programmed.
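A rough sketch of the translation step; the register macro and read
helper names are assumptions and may not match the driver exactly:

/* on X722, remap the software PCTYPE through the GLQF_FD_PCTYPE table
 * before writing the FD programming descriptor */
if (hw->mac.type == I40E_MAC_X722)
    pctype = (enum i40e_filter_pctype)i40e_read_rx_ctl(
        hw, I40E_GLQF_FD_PCTYPES((int)pctype));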
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
This change enables device LRO if requested.
The current implementation of jumbo frame Rx can be used for LRO
directly without changes.
Note that since jumbo frame uses both ring0 and ring1, it cannot
be enabled in UPT (VMDirectPath) mode.
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
When adding a DPDK port to ovs-vswitchd with DPDK, the vmxnet3 device
fails to activate due to a mismatched magic number. This failure causes
the following operations to run: start the port, stop the port,
reconfigure and re-start the port.
During reconfigure, if there is an existing memzone, the driver will
reuse it. But reconfigure may request a different number of Tx/Rx
queues. This results in a memzone with the wrong size and potential
invalid memory access.
To fix this, free the memzone if found and reserve a new one.
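A hedged sketch of the approach using the generic memzone API (the zone
name and size computation are placeholders):

mz = rte_memzone_lookup(zone_name);
if (mz != NULL)
    rte_memzone_free(mz);    /* drop the stale, wrongly sized zone */
mz = rte_memzone_reserve(zone_name, new_size, socket_id, 0);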
Signed-off-by: Yong Wang <yongwang@vmware.com>
Reviewed-by: Guolin Yang <gyang@vmware.com>
Reviewed-by: Daniele Di Proietto <ddiproietto@vmware.com>
Tested-by: Daniele Di Proietto <ddiproietto@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
For the vector PMD, release all mbufs from the Rx queue if no packets
are received after device start.
Fixes: 9ed94e5bb0 ("i40e: add vector Rx")
Signed-off-by: Yury Kylulin <yury.kylulin@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
For the vector PMD, release all mbufs from the Rx queue if no packets
are received after device start.
Fixes: 11b220c649 ("ixgbe: fix release queue mbufs")
Signed-off-by: Yury Kylulin <yury.kylulin@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Not sure what exactly changed and where, but I've started getting
build failures on Fedora rawhide i386:
lib/librte_ip_frag/ip_frag_internal.c:36:23: fatal error:
rte_jhash.h: No such file or directory
#include <rte_jhash.h>
^
Looking at librte_ip_frag, it clearly depends on librte_hash, so it's
probably more a question of something that commonly masks the issue.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
This patch fixes txonly raw packet allocation by resetting the
available headroom.
Indeed, some PMDs such as Virtio might prepend data to the packet,
causing the mbuf's data_off field to be decremented each time the mbuf
gets re-allocated.
For the Virtio PMD, it means that single descriptors are only used the
first few times mbufs get allocated, as at some point there is no
longer enough headroom left to store the header.
An alternative would be to use the standard API to allocate the
packets, which does reset the headroom, but the performance impact is
too big to consider this an option.
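A minimal sketch of the txonly change (the mempool variable name is
illustrative):

pkt = rte_mbuf_raw_alloc(mbp);
if (pkt == NULL)
    return;
/* raw alloc does not touch data_off, so restore the default headroom */
rte_pktmbuf_reset_headroom(pkt);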
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Some applications use the rte_mbuf_raw_alloc() function to improve
performance by not resetting the mbuf's fields to their default state.
This can however be problematic for mbuf consumers that need some
headroom, meaning that the data_off field gets decremented after
allocation. When the mbuf is re-used afterwards, there might not be
enough room for the consumer to prepend anything, if the data_off field
is not reset to its default value.
This patch adds a new rte_pktmbuf_reset_headroom() function that
applications can call to reset the data_off field.
This patch also replaces the current data_off assignments in the mbuf
lib with a call to this function.
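A hedged sketch of what such a helper looks like; the library's actual
implementation may differ in details such as how it clamps against the
buffer length:

static inline void rte_pktmbuf_reset_headroom(struct rte_mbuf *m)
{
    /* restore the default headroom so consumers can prepend data again */
    m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM, (uint16_t)m->buf_len);
}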
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
On error, the mempool object has to be freed, and rte_errno should be a
positive value.
Fixes: 152ca51790 ("mbuf: use default mempool handler from config")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
This patch replaces the pipelined rte_hash lookup mechanism with a
loop-and-jump model, which performs significantly better,
especially for smaller table sizes and smaller table occupancies.
Signed-off-by: Byron Marohn <byron.marohn@intel.com>
Signed-off-by: Saikrishna Edupuganti <saikrishna.edupuganti@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Sameh Gobriel <sameh.gobriel@intel.com>
In lookup bulk function, the signatures of all entries
are compared against the signature of the key that is being looked up.
Now that all the signatures are together, they can be compared
with vector instructions (SSE, AVX2), achieving higher lookup performance.
Also, entries per bucket are increased to 8 when using processors
with AVX2, as 256 bits can be compared at once, which is the size of
8x32-bit signatures.
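As an illustrative sketch (not the library's exact code), comparing
eight 32-bit signatures at once with AVX2 could look like this:

#include <immintrin.h>

/* return a bit per bucket entry whose stored signature matches 'sig' */
static inline uint32_t
compare_signatures_avx2(const uint32_t *bucket_sigs, uint32_t sig)
{
    __m256i stored = _mm256_loadu_si256((const __m256i *)bucket_sigs);
    __m256i match = _mm256_cmpeq_epi32(stored, _mm256_set1_epi32(sig));
    return (uint32_t)_mm256_movemask_ps(_mm256_castsi256_ps(match));
}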
Signed-off-by: Byron Marohn <byron.marohn@intel.com>
Signed-off-by: Saikrishna Edupuganti <saikrishna.edupuganti@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Sameh Gobriel <sameh.gobriel@intel.com>
Move the current signatures of all entries together in the bucket, and
likewise all the alternative signatures, instead of having the current
and alternative signatures together per entry in the bucket.
This will be beneficial in the next commits, where a vectorized
comparison will be performed, achieving better performance.
The alternative signatures have been moved away from the current
signatures so that the key indices are adjacent to the current
signatures, as these two fields are used by lookup and should be in the
same cache line.
Signed-off-by: Byron Marohn <byron.marohn@intel.com>
Signed-off-by: Saikrishna Edupuganti <saikrishna.edupuganti@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Sameh Gobriel <sameh.gobriel@intel.com>
In order to optimize lookup performance, hash structure
is reordered, so all fields used for lookup will be
in the first cache line.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Sameh Gobriel <sameh.gobriel@intel.com>
For periodic timers, if lag was introduced, the current code added
additional delay when the next periodic timer was initialized because
it did not take the accumulated lag into account. With this fix, the
next occurrence of the timer is scheduled taking that lag into account,
correcting the behavior.
Fixes: 9b15ba89 ("timer: use a skip list")
Signed-off-by: Karmarkar Suyash <skarmarkar@sonusnet.com>
Acked-by: Robert Sanford <rsanford@akamai.com>
Running secondary is tricky due to the need to map the memory region
at the right place in VM, which is whatever primary has chosen. If the
base address for the primary happens to be already mapped in the
secondary, we will hit precisely these error messages (depending on
whether we fail on the config region or the hugepages). This is why
there is
already a comment about ASLR.
The issue is that in most cases, remapping does not happen and "errno"
is not changed and therefore stale. In our case, we got a "permission
denied", which sent us down the wrong track. It's such a common error
for secondary that I feel this error message should be unambiguous and
helpful.
The call to close was also moved because close() may override errno.
Signed-off-by: Jean Tourrilhes <jt@labs.hpe.com>
When compiling with C++, the compiler treats
void (*rte_delay_us)(unsigned int us);
as a definition of the global variable, so subsequent linking with
librte_eal fails.
Fixes: b4d63fb622 ("eal: customize delay function")
Steps to reproduce:
$ cat rttm1.cpp
#include <iostream>
#include <rte_eal.h>
#include <rte_cycles.h>

using namespace std;

int main(int argc, char *argv[])
{
    int ret = rte_eal_init(argc, argv);
    rte_delay_us(1);
    cout << "return code ";
    cout << ret;
    return ret;
}
$ g++ -m64 -I/${RTE_SDK}/${RTE_TARGET}/include -c -o rttm1.o rttm1.cpp
$ gcc -m64 -pthread -o rttm1 rttm1.o -ldl -Wl,-lstdc++ \
-L/${RTE_SDK}/${RTE_TARGET}/lib -Wl,-lrte_eal
.../librte_eal.a(eal_common_timer.o):
(.bss+0x0): multiple definition of `rte_delay_us'
rttm1.o:(.bss+0x0): first defined here
collect2: error: ld returned 1 exit status
$ nm rttm1.o | grep rte_delay_us
0000000000000092 t _GLOBAL__sub_I_rte_delay_us
0000000000000000 B rte_delay_us
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This feature adds vhost PMD extended statistics from a per-port
perspective in order to meet the requirements of applications such as
OVS.
RX/TX xstats count the bytes without CRC. This is different from
physical NIC stats, which include the CRC.
The statistics counters are based on RFC 2819 and RFC 2863 as follows:
rx/tx_good_packets
rx/tx_total_bytes
rx/tx_missed_pkts
rx/tx_broadcast_packets
rx/tx_multicast_packets
rx/tx_unicast_packets
rx/tx_undersize_errors
rx/tx_size_64_packets
rx/tx_size_65_to_127_packets
rx/tx_size_128_to_255_packets
rx/tx_size_256_to_511_packets
rx/tx_size_512_to_1023_packets
rx/tx_size_1024_to_1522_packets
rx/tx_1523_to_max_packets
rx/tx_errors
rx_fragmented_errors
rx_jabber_errors
rx_unknown_protos_packets
No API is changed or added. Applications use
rte_eth_xstats_get_names() to retrieve the names of the supported vhost
xstats,
rte_eth_xstats_get() to retrieve vhost extended statistics, and
rte_eth_xstats_reset() to reset vhost extended statistics, as shown in
the sketch below.
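A hedged usage sketch (includes and error handling omitted; port_id is
assumed to refer to a vhost port):

int n = rte_eth_xstats_get_names(port_id, NULL, 0);    /* number of xstats */
struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
struct rte_eth_xstat *values = calloc(n, sizeof(*values));
rte_eth_xstats_get_names(port_id, names, n);
rte_eth_xstats_get(port_id, values, n);
for (int i = 0; i < n; i++)
    printf("%s: %" PRIu64 "\n", names[i].name, values[i].value);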
The usage of vhost PMD xstats is the same as virtio PMD xstats.
For example, when the testpmd application is running in interactive
mode, the vhost PMD xstats support the following two commands:
show port xstats all | port_id will show vhost xstats
clear port xstats all | port_id will reset vhost xstats
The net/virtio PMD xstats implementation (the function
virtio_update_packet_stats) was used as a reference when implementing
this feature.
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The patch moves all stats counters to a newly defined struct
vhost_stats, in order to manage all stats counters in a unified way and
to simplify the subsequent implementation of vhost_dev_xstats_reset().
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
In some cases when using the vHost PMD, certain vHost library functions
may still need to be accessed. One such example is the
rte_vhost_get_queue_num function which returns the number of virtqueues
reported by the guest - information which is not exposed by the PMD.
This commit introduces a new rte_eth_vhost function that returns the
'vid' associated with a given port id. This allows the PMD user to call
vHost library functions which require the 'vid' value.
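A hedged sketch of the intended usage; the getter's exact name is taken
as an assumption from this description:

int vid = rte_eth_vhost_get_vid_from_port_id(port_id);
if (vid >= 0) {
    /* with the vid, the vhost library can be queried directly */
    uint32_t nr_vq = rte_vhost_get_queue_num(vid);
    printf("port %u has %u virtqueues\n", (unsigned)port_id, nr_vq);
}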
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Added a NEON based Rx vector implementation.
Selection of the new handler is based on NEON availability at runtime.
Updated the release notes and MAINTAINERS file.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
Introduced cpuflag based run-time detection to select the SSE based
simple Rx handler.
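A rough sketch of such a check; the specific CPU flag and handler
symbol are assumptions:

if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE3))
    eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;    /* simple vector Rx */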
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Split out the SSE instruction based virtio simple Rx implementation
to a separate file.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Removed unnecessary compile time dependency on "use_simple_rxtx".
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Interestingly, clang and gcc have different prototypes for
_mm_prefetch().
For gcc, we have
_mm_prefetch (const void *__P, enum _mm_hint __I)
While for clang, it's
#define _mm_prefetch(a, sel) (__builtin_prefetch((void *)(a), 0, (sel)))
That's how the following error comes with clang:
error: cast from 'const void *' to 'void *' drops const qualifier
[-Werror,-Wcast-qual]
_mm_prefetch((const void *)rused, _MM_HINT_T0);
/usr/lib/llvm-3.8/bin/../lib/clang/3.8.0/include/xmmintrin.h:684:58:
note: expanded from macro '_mm_prefetch'
#define _mm_prefetch(a, sel) (__builtin_prefetch((void *)(a),
0, (sel)))
What's weird is that the build was actually okay before. I met the
issue while applying Jerin's vector support for ARM patch set: he just
moves this piece of code to another file, nothing else changed.
This patch fixes the issue once Jerin's patchset is applied; thus, I
think it's still needed.
Similarly, make the same change to the other _mm_prefetch users, just
in case this weird issue shows up again somehow later.
Fixes: fc3d66212f ("virtio: add vector Rx")
Fixes: c95584dc2b ("ixgbe: new vectorized functions for Rx/Tx")
Fixes: 9ed94e5bb0 ("i40e: add vector Rx")
Fixes: 7092be8437 ("fm10k: add vector Rx")
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Currently, when a virtio_user device fails to be started (e.g., the
vhost unix socket does not exist), the init function does not return
the struct rte_eth_dev (and some other structs) back to the ether
layer. What's more, it does not report the error to the upper layer.
The fix is to free those structs and report the error when failing to
start virtio_user devices.
Fixes: ce2eabdd43 ("net/virtio-user: add virtual device")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
When virtio_user is used with VPP's native vhost-user, it cannot
send/receive any packets.
The root cause is that vpp-vhost-user interprets the message
VHOST_USER_SET_FEATURES as putting this device into its init state,
i.e., zeroing all related structures. However, the previous code sends
this message last in the whole initialization process, which leads to
all previously transferred information being zeroed.
To fix this issue, we rearrange the sequence of those messages:
- step 0, send VHOST_USER_SET_VRING_CALL so that vhost allocates
virtqueue structures;
- step 1, send VHOST_USER_SET_FEATURES to confirm the features;
- step 2, send VHOST_USER_SET_MEM_TABLE to share mem regions;
- step 3, send VHOST_USER_SET_VRING_NUM, VHOST_USER_SET_VRING_BASE,
VHOST_USER_SET_VRING_ADDR, VHOST_USER_SET_VRING_KICK for each
queue;
- ...
Fixes: 37a7eb2ae8 ("net/virtio-user: add device emulation layer")
Reported-by: Zhihong Wang <zhihong.wang@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
When virtio_user is used with OVS-DPDK (with mq disabled), it cannot
receive any packets. This is because no queue is enabled at all when
mq is disabled.
To fix it, we should consistently make sure the 1st queue is enabled,
which is also the behaviour QEMU takes.
Fixes: 37a7eb2ae8 ("net/virtio-user: add device emulation layer")
Reported-by: Ning Li <lining18@jd.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Indirect descriptors are usually supported by virtio-net devices,
allowing a larger number of requests to be dispatched.
When the virtio device sends a packet using indirect descriptors,
only one slot is used in the ring, even for large packets.
The main effect is to improve the 0% packet loss benchmark.
A PVP benchmark using Moongen (64 bytes) on the TE, and testpmd
(fwd io for host, macswap for VM) on DUT shows a +50% gain for
zero loss.
On the downside, a micro-benchmark using testpmd txonly in the VM and
rxonly on the host shows a loss between 1 and 4%. But depending on the
needs, the feature can be disabled at VM boot time by passing the
indirect_desc=off argument to the vhost-user device in QEMU.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The dpdk-devbind.py script does not find/display the ifname for virtio
interfaces since the "net" directory is not directly under the device
directory but rather under a subdirectory.
eg.
> dpdk-devbind.py --status
0000:00:03.0 'Virtio network device' if= drv=virtio-pci unused=
This change searches for the first "net" directory under the device
directory hierarchy.
eg.
0000:00:03.0 'Virtio network device' if=ens3 drv=virtio-pci unused=
Fixes: 629395b063 ("igb_uio: remove PCI id table")
Signed-off-by: Gary Mussar <gmussar@ciena.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
We have a stat named "size_1024_1517_packets", while the code actually
counts the range [1024, 1518], which is obviously wrong.
The code is as follows in the function virtio_update_packet_stats:
else if (s < 1519)
stats->size_bins[6]++;
We could either fix it by correcting the "if" check in the code, or fix
it by just renaming the stat to conform to the code. The latter
solution is taken because that's what RFC 2819 suggests.
Fixes: 76d4c652e0 ("virtio: add extended stats")
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Virtio indirect descriptors are supported by the data path, but the
feature bit is never set during feature negotiation.
This patch simply adds VIRTIO_RING_F_INDIRECT_DESC back to the
supported features bit mask, hence enabling the use of indirect
descriptors when the feature is negotiated with the device.
Signed-off-by: Pierre Pfister <ppfister@cisco.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
As new_device and destroy_device use an int instead of a
"struct virtio_net *", the comment about setting VIRTIO_DEV_RUNNING
doesn't make sense anymore. Moreover, if I've understood the code
correctly, the drivers take care of setting the flag before calling the
callbacks, so this comment is obsolete and I've removed it.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
No need to use a pointer to store/retrieve features.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Invoke get_device() at the beginning of vhost_user_msg_handler(), so
that we only need to check the return value once. This saves a lot of
duplicated get-and-check device code.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Some functions are prefixed with "user_", while others with "vhost_".
Make them all start with "vhost_user_" to unify the function names.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>