eal: add shorthand __rte_weak macro
qat: update code to use __rte_weak macro
avf: update code to use __rte_weak macro
fm10k: update code to use __rte_weak macro
i40e: update code to use __rte_weak macro
ixgbe: update code to use __rte_weak macro
mlx5: update code to use __rte_weak macro
virtio: update code to use __rte_weak macro
acl: update code to use __rte_weak macro
bpf: update code to use __rte_weak macro
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
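A minimal sketch of the shorthand and a typical driver use, assuming
the conventional GCC/clang weak-attribute syntax (stub name
illustrative):

#include <stdint.h>

/* shorthand for the GCC/clang weak-symbol attribute */
#define __rte_weak __attribute__((__weak__))

/* default stub that a vector implementation, when compiled in,
 * overrides at link time */
__rte_weak uint16_t
recv_pkts_vec(void *rx_queue, void **rx_pkts, uint16_t nb_pkts)
{
	(void)rx_queue;
	(void)rx_pkts;
	(void)nb_pkts;
	return 0; /* no vector path available */
}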
The simple Tx path does not comply with the Virtio specification.
Now that VIRTIO_F_IN_ORDER feature is supported by the Virtio PMD,
let's use this optimized path instead.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Since commit efc83a1e7f ("net/virtio: fix queue setup consistency"),
when resuming a virtio port, the rx rings are refilled with new mbufs
until they are full (vq->vq_free_cnt == 0). This is done without
ensuring that the descriptor index remains a multiple of
RTE_VIRTIO_VPMD_RX_REARM_THRESH, which is a prerequisite when using the
vector mode. This can cause an out-of-bounds access in the Rx ring.
This commit changes the vector refill method from
virtqueue_enqueue_recv_refill_simple() to virtio_rxq_rearm_vec(), which
properly checks that the refill is done in batches of
RTE_VIRTIO_VPMD_RX_REARM_THRESH.
As virtqueue_enqueue_recv_refill_simple() is no longer used, this
patch also removes the function.
Fixes: efc83a1e7f ("net/virtio: fix queue setup consistency")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Jianfeng Tan <jianfeng.tan@intel.com>
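A self-contained sketch of that batching rule, with hypothetical names
(REARM_THRESH standing in for RTE_VIRTIO_VPMD_RX_REARM_THRESH):

#include <stdint.h>

#define REARM_THRESH 32u

/* consume free entries only in full batches, so the ring index stays
 * a multiple of the threshold; any leftover smaller than a batch is
 * deliberately left unfilled */
static uint16_t
rearm_in_batches(uint16_t free_cnt, uint16_t *ring_idx)
{
	uint16_t batches = 0;

	while (free_cnt >= REARM_THRESH) {
		/* virtio_rxq_rearm_vec() refills one batch here */
		*ring_idx = (uint16_t)(*ring_idx + REARM_THRESH);
		free_cnt -= REARM_THRESH;
		batches++;
	}
	return batches;
}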
The mbuf->data_off was not properly set for the first received
mbufs. Fix this by setting it in virtqueue_enqueue_recv_refill_simple(),
which is used to enqueue the first mbuf in the ring.
The function virtio_rxq_rearm_vec(), which is used to rearm the ring
with new mbufs, is valid and does not need to be updated.
Fixes: cab0461234 ("virtio: fill Rx avail ring with blank mbufs")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
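The assumed shape of the fix, sketched (the exact hunk may differ):

#include <rte_mbuf.h>

/* reset the data offset when the mbuf is first placed in the ring */
static void
refill_init_mbuf(struct rte_mbuf *m)
{
	m->data_off = RTE_PKTMBUF_HEADROOM;
}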
Create an rte_ethdev_driver.h file and move PMD-specific APIs there.
Drivers are updated to include this new header file.
There is no change in header content, and since ethdev.h is included
by ethdev_driver.h, nothing changes from the driver point of view;
this is only a logical grouping of APIs. From the application point of
view, driver-specific APIs can no longer be accessed, and they
shouldn't be.
Some PMD-specific data structures still remain in ethdev.h because
inline functions in the header use them. Those will be handled
separately.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
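The driver-side change, sketched; the application-facing include is
unchanged:

/* in a PMD source file: */
#include <rte_ethdev_driver.h> /* was: #include <rte_ethdev.h> */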
This patch adds dev_pause, dev_resume and inject_pkts APIs to allow
the driver to pause the worker threads and inject special packets into
a Tx queue. The next patch will build on this.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
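A hypothetical caller sketch, with the signatures assumed from this
description (pause the datapath, inject a special packet, resume):

#include <rte_ethdev_driver.h>
#include <rte_mbuf.h>

int virtio_dev_pause(struct rte_eth_dev *dev);
void virtio_dev_resume(struct rte_eth_dev *dev);
uint16_t virtio_inject_pkts(struct rte_eth_dev *dev,
			    struct rte_mbuf **tx_pkts, int nb_pkts);

static void
send_special_pkt(struct rte_eth_dev *dev, struct rte_mbuf *pkt)
{
	if (virtio_dev_pause(dev) < 0)
		return; /* device not started, nothing to pause */
	virtio_inject_pkts(dev, &pkt, 1);
	virtio_dev_resume(dev);
}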
Replace the BSD license header with the SPDX tag for files
with only an Intel copyright on them.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The memzone header is often included without good reason.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The mbuf->port was not properly set for the first received
mbufs. Fix this by setting it in virtqueue_enqueue_recv_refill_simple(),
which is used to enqueue the first mbuf in the ring.
The function virtio_rxq_rearm_vec(), which is used to rearm the ring
with new mbufs, is correct and does not need to be updated.
Fixes: cab0461234 ("virtio: fill Rx avail ring with blank mbufs")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Yuanhan Liu <yliu@fridaylinux.org>
Currently, virtio_dev_stop only disables interrupts and marks the
link down; when it is invoked, Tx/Rx traffic still flows. This is
surprising behavior. This patch makes stop/start actually switch the
traffic flow off and on.
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Split out the SSE-based virtio simple Rx implementation into a
separate file.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Remove the unnecessary compile-time dependency on "use_simple_rxtx".
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Interestingly, clang and gcc have different prototypes for _mm_prefetch().
For gcc, we have
_mm_prefetch (const void *__P, enum _mm_hint __I)
While for clang, it's
#define _mm_prefetch(a, sel) (__builtin_prefetch((void *)(a), 0, (sel)))
That's how the following error comes with clang:
error: cast from 'const void *' to 'void *' drops const qualifier
[-Werror,-Wcast-qual]
_mm_prefetch((const void *)rused, _MM_HINT_T0);
/usr/lib/llvm-3.8/bin/../lib/clang/3.8.0/include/xmmintrin.h:684:58:
note: expanded from macro '_mm_prefetch'
#define _mm_prefetch(a, sel) (__builtin_prefetch((void *)(a),
0, (sel)))
What's weird is that the build was actually okay before. I met this
while applying Jerin's vector support for ARM patch set: it just moves
this piece of code to another file, nothing else changed.
This patch fixes the issue seen when Jerin's patchset is applied;
thus, I think it's still needed.
Similarly, make the same change to other _mm_prefetch users, just in case
this weird issue shows up again somehow later.
Fixes: fc3d66212f ("virtio: add vector Rx")
Fixes: c95584dc2b ("ixgbe: new vectorized functions for Rx/Tx")
Fixes: 9ed94e5bb0 ("i40e: add vector Rx")
Fixes: 7092be8437 ("fm10k: add vector Rx")
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
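One way to keep both compilers happy, sketched under the assumption
that any qualifier-dropping cast must disappear from our own code (the
patch may equally have switched to DPDK's own prefetch helpers):

#include <stdint.h>
#include <xmmintrin.h>

static void
prefetch_entry(const void *p)
{
	/* cast through uintptr_t: no pointer cast drops const, so
	 * clang's macro expansion no longer trips -Wcast-qual */
	_mm_prefetch((void *)(uintptr_t)p, _MM_HINT_T0);
}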
The support of virtio-user changed the way the mbuf dma address is
retrieved, using a physical address in case of virtio-pci and a virtual
address in case of virtio-user.
This change introduced some possible memory corruption in packets,
replacing:
m->buf_physaddr + RTE_PKTMBUF_HEADROOM
by:
m->buf_physaddr + m->data_off (through a macro)
This patch fixes this issue, restoring the original behavior.
By the way, it also reworks the macros, adding a "VIRTIO_" prefix and
API comments.
Fixes: f24f8f9fee ("net/virtio: allow virtual address to fill vring descriptors")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This is for target i686-native-linuxapp-gcc with gcc 6.
The compilation error is:
In file included from
include/rte_mempool.h:77:0, from
drivers/net/virtio/virtio_rxtx_simple.c:
In function `virtio_xmit_pkts_simple':
include/rte_memcpy.h:551:2: error:
array subscript is above array bounds
rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Call stack is as following:
virtio_xmit_pkts_simple
virtio_xmit_cleanup
rte_mempool_put_bulk
rte_mempool_generic_put
__mempool_generic_put
rte_memcpy
The array used as the source buffer in virtio_xmit_cleanup() ("free")
is a pointer array with 32 elements; on 32-bit targets this makes 128
bytes. In the rte_memcpy() implementation, there is a code piece like
the following:
if (size > 256) {
rte_mov128(...);
rte_mov128(...); <--- [1]
....
}
The compiler traces the array all through the call stack, knows the
size of the array is 128 bytes, and generates a warning on [1] above,
which tries to access beyond byte 128.
But unfortunately it ignores the "(size > 256)" check.
Giving the compiler a hint that the variable "size" is related to the
size of the source buffer fixes the warning.
Fixes: 863bfb4744 ("mempool: optimize copy in cache")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
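A hedged illustration of that hint (names hypothetical, not the exact
patch): bounding the element count by the array dimension lets gcc
prove the copy can never read past the 32-entry array.

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static void
put_free_bulk(struct rte_mempool *mp, struct rte_mbuf **mbufs,
	      unsigned int n)
{
	void *free[32];
	unsigned int i, nb_free = 0;

	/* the bound on nb_free is the hint: it ties the count to
	 * sizeof(free), so the rte_memcpy() buried inside
	 * rte_mempool_put_bulk() is provably in range */
	for (i = 0; i < n && nb_free < RTE_DIM(free); i++)
		free[nb_free++] = mbufs[i];

	rte_mempool_put_bulk(mp, free, nb_free);
}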
In the following loop:
while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
...
}
There is no external function call or explicit memory barrier in the
loop, so the re-read of used->idx might be optimized away and the
value retrieved only once.
Use of volatile should normally be avoided; ACCESS_ONCE is the Linux
kernel's style for handling this issue. Once we have such a macro in
DPDK, we could switch to that style.
virtio_recv_mergeable_pkts might also have the same issue, so fix
it as well.
Fixes: 823ad64795 ("virtio: support multiple queues")
Fixes: 13ce5e7eb9 ("virtio: mergeable buffers")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
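A sketch of the approach, assuming the fix marks the device-written
index volatile (struct layout per the virtio spec):

#include <stdint.h>

struct vring_used_elem {
	uint32_t id;  /* head of the used descriptor chain */
	uint32_t len; /* bytes written into the chain */
};

struct vring_used {
	uint16_t flags;
	/* volatile: written by the host, so the compiler must reload
	 * it on every iteration of the polling loop */
	volatile uint16_t idx;
	struct vring_used_elem ring[0];
};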
This patch is related to how the relative address is calculated for
the vhost backend.
The principle is that: based on one or multiple shared memory regions,
vhost maintains a reference system with the frontend start address,
backend start address, and length for each segment, so that each
frontend address (GPA, Guest Physical Address) can be translated into
vhost-recognizable backend address. To make the address translation
efficient, we need to maintain as few regions as possible. In the case
of a VM, GPA is always locally contiguous. But for some other cases,
like virtio-user, GPA contiguity is not guaranteed; therefore, we use
virtual addresses here.
It basically means:
a. when set_base_addr is called, the VA is used;
b. when preparing Rx descriptors, the VA is used;
c. when transmitting packets, the VA is filled in Tx descriptors;
d. in Tx and CQ headers, the VA is used.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
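A hypothetical illustration of that reference system: each region
records a frontend start, a backend start and a length, and
translation is a bounds check plus an offset.

#include <stdint.h>

struct mem_region {
	uint64_t front_start; /* GPA for a VM, VA for virtio-user */
	uint64_t back_start;  /* address the vhost backend can use */
	uint64_t len;
};

static uint64_t
front_to_back(const struct mem_region *regs, uint32_t nregions,
	      uint64_t front)
{
	uint32_t i;

	for (i = 0; i < nregions; i++) {
		if (front >= regs[i].front_start &&
		    front < regs[i].front_start + regs[i].len)
			return front - regs[i].front_start +
			       regs[i].back_start;
	}
	return 0; /* no region covers this address */
}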
We keep a common vq structure containing only vq-related fields,
and then split the others into Rx, Tx and control queue structures
respectively.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
[Jianfeng Tan: found and fixed 2 bugs]
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
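A sketch of the split with illustrative names and abridged field lists
(the patch uses its own naming):

#include <stdint.h>

struct vq_common {		/* shared ring bookkeeping */
	uint16_t vq_nentries;
	uint16_t vq_free_cnt;
	uint16_t vq_used_cons_idx;
};

struct rx_queue {		/* Rx-only state */
	struct vq_common vq;
	/* sw ring, fake mbuf, rx stats ... */
};

struct tx_queue {		/* Tx-only state */
	struct vq_common vq;
	/* header memzone, tx stats ... */
};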
Macros RTE_MBUF_DATA_DMA_ADDR and RTE_MBUF_DATA_DMA_ADDR_DEFAULT
are defined in each PMD driver file. Convert macros to inline
functions and move them to common lib/librte_mbuf/rte_mbuf.h file.
PMD drivers include the rte_mbuf.h file directly or indirectly, hence
no additional header file inclusion is necessary.
Signed-off-by: Ravi Kerur <rkerur@gmail.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
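The converted helpers, sketched as the obvious translation of the two
macros (field names as they existed at the time):

#include <rte_memory.h>
#include <rte_mbuf.h>

static inline phys_addr_t
rte_mbuf_data_dma_addr(const struct rte_mbuf *mb)
{
	return mb->buf_physaddr + mb->data_off;
}

static inline phys_addr_t
rte_mbuf_data_dma_addr_default(const struct rte_mbuf *mb)
{
	return mb->buf_physaddr + RTE_PKTMBUF_HEADROOM;
}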
The mergeable virtio net hdr format has been the standard and the
only virtio net hdr format since virtio 1.0. Therefore, we cannot
hardcode hdr_size to "sizeof(struct virtio_net_hdr)" any more
at virtio_recv_pkts(); otherwise, there would be a mismatch of
hdr size between rte_vhost_enqueue_burst() and virtio_recv_pkts(),
leading to packet corruption.
Instead, we should retrieve it from hw->vtnet_hdr_size; we will
do proper settings at eth_virtio_dev_init() in later patches.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
Reviewed-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Tested-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Huawei Xie <huawei.xie@intel.com>
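A self-contained sketch of the size selection the later patches do at
init time (struct layouts per the virtio spec; helper name
hypothetical):

#include <stdint.h>

struct virtio_net_hdr {			/* legacy header */
	uint8_t  flags;
	uint8_t  gso_type;
	uint16_t hdr_len;
	uint16_t gso_size;
	uint16_t csum_start;
	uint16_t csum_offset;
};

struct virtio_net_hdr_mrg_rxbuf {	/* virtio 1.0 / mergeable */
	struct virtio_net_hdr hdr;
	uint16_t num_buffers;
};

static uint16_t
select_vtnet_hdr_size(int mrg_rxbuf_negotiated)
{
	return (uint16_t)(mrg_rxbuf_negotiated ?
			  sizeof(struct virtio_net_hdr_mrg_rxbuf) :
			  sizeof(struct virtio_net_hdr));
}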
Bulk-free mbufs when cleaning the used ring.
The shift operation on idx could be saved if vq_free_cnt meant free
slots rather than free descriptors.
TODO: rearrange the vq data structure, packing the stats variables
together so that one vector instruction could update all of them.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
With a fixed avail ring, we don't need to get the desc idx from the
avail ring; the virtio driver only has to deal with the desc ring.
This patch uses vector instructions to accelerate processing of the
desc ring.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
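A hypothetical illustration of the "fixed" avail ring: each avail slot
is bound once to the descriptor of the same index and never rewritten,
so the driver can walk the desc ring directly with vector loads.

#include <stdint.h>

static void
fix_avail_ring(uint16_t *avail_ring, uint16_t nentries)
{
	uint16_t i;

	for (i = 0; i < nentries; i++)
		avail_ring[i] = i; /* desc idx == avail idx, forever */
}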
Add a software Rx ring in virtqueue.
Add fake_mbuf in virtqueue for wraparound processing.
Fill the avail ring with blank mbufs in virtio_dev_vring_start().
Add the virtio_rxtx.h header file for RTE_VIRTIO_PMD_MAX_BURST.
All Rx/Tx-related declarations will be moved into this header file in
the future.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>